
Outside of deeply integrated KDE apps, I'm shipping QML + Python apps and IMO it's a nice experience.

It's not at all like writing JS apps a la React or jQuery. At minimum, JS syntax is used for expression evaluation, but you don't have to use it beyond some simple handlers, formatting/convenience functions, or whatever.

I'm looking at a smaller app, with 5k lines of QML and 10k of Python. It's got 250 lines of JS according to tokei, and quickly poking at the QML for JS leads to an additional 200 lines, so let's double to 500 lines. That's like 3% JavaScript.
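As an illustration of how little of it ends up being JavaScript, here's a minimal sketch assuming PySide6; the `Backend` object and its `name` property are made-up examples, and the only JS is the one-line binding expression in the Text element:

    # Minimal Python + QML sketch, assuming PySide6 is installed.
    import sys
    from PySide6.QtCore import QObject, Property
    from PySide6.QtGui import QGuiApplication
    from PySide6.QtQml import QQmlApplicationEngine

    QML = b"""
    import QtQuick
    import QtQuick.Controls

    ApplicationWindow {
        visible: true; width: 320; height: 120
        Text {
            anchors.centerIn: parent
            // the only JavaScript in this file is the expression below
            text: "Hello, " + backend.name
        }
    }
    """

    class Backend(QObject):
        # Plain Python; exposed to QML as a constant property
        @Property(str, constant=True)
        def name(self):
            return "QML"

    app = QGuiApplication(sys.argv)
    backend = Backend()
    engine = QQmlApplicationEngine()
    engine.rootContext().setContextProperty("backend", backend)
    engine.loadData(QML)
    sys.exit(app.exec())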

That said, modern KDE is centered around QML, UI frameworks like Kirigami are built using it, but it lacks comprehensive QML, Python, Rust, etc bindings for KDE frameworks.

At minimum, you will at some point be reading C++ docs in order to use your bindings. If you get in the weeds with it, you will eventually be writing your own bindings or wrappers. If you want to avoid that, you will likely be writing C++ for more complicated applications.

It's complex. I figured that with QML Plasmoids they were going for a QML-first approach to future development, but that's been walked back in favor of compiled Plasmoids in recent releases.


How does this line compare to the Ryzen AI branded Xilinx FPGAs in newer mobile AMD APUs?

The Ryzen AI NPU is from Xilinx but it's not an FPGA BTW.

I thought the XDNA line was related to Xilinx's Versal (or Alveo, I forget) lines that use FPGA fabric?

Or maybe I'm misinterpreting press releases, as evidently Notebookcheck.net lied to me years ago :(

[1] https://www.notebookcheck.net/AMD-details-4-nm-Zen-4-Ryzen-7...


This is strictly state capitalism at work, not communism, and I'd argue that the "state" qualifier is redundant.

If they have the runway, they can try the tried and true method of undercutting competitors until they fold, and then capture the market for themselves.

Private investors have the stomach for this tactic; surely a company with the state's backing can remain solvent even longer than one funded by private investors.


I'd love to see it, but they'll be given the BYD/Huawei/etc treatment to keep the silicon cartel's margins from crashing.

The world exists outside the US too, you know.

Plenty of countries gave Huawei the same treatment the US did, and the US and its allies have the weight to impose sanctions, tariffs, etc to punish consumers within their borders for daring to consider better and cheaper options.

The allies of the US all banned Huawei because the US asked them (quite forcefully) to do so.

CXMT is already under a full set of US long-arm sanctions, so probably only very little of their product will ever reach western markets.

However, some Chinese demand will definitely be met by CXMT's products displacing western suppliers - so maybe there is a tiny bit of relief for western consumers there.


> However, some Chinese demand will definitely be met by CXMT's products displacing western suppliers - so maybe there is a tiny bit of relief for western consumers there.

I recall years of hints that the affordable housing crunch would eventually be helped by developers - even tho they're only building tons of not-affordable housing.

We're five years in. No meaningful change is visible from the perspective of folks who need affordable housing.

Based on that lesson, I expect what CXMT does there to have no meaningful effect here.


> I recall years of hints that the affordable housing crunch would eventually be helped by developers - even tho they're only building tons of not-affordable housing.

If I may ask, what cities? For example, Austin has seen a 6.6% asking price decrease for 0- to 2-bedroom units [1]. The big problem is there is an absolutely massive hole, and very few places are building "enough" to make a dent.

[1] https://www.realtor.com/advice/hyperlocal/austin-rents-are-g...


The RAM market is much more commoditized than housing. Almost any increase in supply should reduce prices world-wide.

How could the number of subsidized housing units increase from building non-subsidized housing? That is illogical. Market-rate housing will become cheaper, and therefore more housing will be affordable to more people, but you can't make the number of "affordable housing" units go up by building anything else, because "affordable housing" is a brand name for subsidized housing.

Ok, but BYD is everywhere

Yes, and? There are Huawei stores all over Asia, that little place where 60% of people live.

Sucks for everyone else is what I'm saying. 100% of people should be allowed access, not be preempted from it in order to protect the value of exalted tech cartels.

Doesn't matter given the current shortages.

If CXMT can fill more of China's domestic demand, that's still good news for us all.


The point is offloading ML workloads to hardware that is energy efficient, not necessarily "fast" hardware.

You want to minimize the real and energy costs at the expense of time.

Assuming NPUs don't get pulled from consumer hardware altogether, theoretically the time/efficiency trade-off gap will become smaller and smaller as time goes on.


No it doesn't, its Wayland support is a mess, its codec support is lackluster, and somehow the experience is worse when you use VA-API hardware decoding.

> - It's hard to remember IPv6 addresses. The prospect of reconfiguring all my router and firewall rules looks rather painful.

fd00::1 is pretty easy to remember. It's your network, give yourself a sane and short prefix.


That's a gripe I have with IPv6. There are too damn many special networks and addresses!

With IPv4 I can easily remember 10.0.0.0/8 and 192.168.0.0/16, but I can't remember the other one off the top of my head. (172.16.0.0/12 I think?). Multicast is 224.x.x.x/x IIRC, but definitely need to look that one up when I need it.

IPv6 has SO many special networks. Network. Public. Multicast. Link local. (Which isn't like an IPv4 link local, but apparently it can actually be on the LAN? IDK - I was just learning about it earlier today.) And every interface seems to have about 5 different addresses of each type.


Amusingly, there are a lot more special IPv4 networks that you just don't know about, too. e.g. link-local IPv4 is 169.254.0.0/16. It just isn't auto-configured on every IPv4 interface by default, like fe80::/10 is on IPv6 interfaces, and the TCP/IP stacks on most platforms do not enforce the link-local properties of it in IPv4 like they do in IPv6.

It's like the difference between HTML and a strictly typed language. Permissiveness and flexibility are both a blessing and a curse. As with a lot of things, which one it is in any given situation depends greatly on the situation.


For almost all cases, there is absolutely zero need to ever remember addresses or deal with them directly. Give your devices proper names, and your router's DNS will handle resolution automatically.

There is no point in your network having sequential addresses, so you don’t need DHCP; routers advertise configuration, clients know where to look for it.

IPv6 is amazing, if you let it handle connectivity without trying to micromanage it.


I think this is the big hangup: wanting to micromanage each and every address instead of letting it just manage itself. It reminds me on some level of the pets vs. cattle distinction for containers and servers. A mental switch is needed, and many are resistant to it.

One thing I've noticed is if people have spent a long time learning something they are incredibly reluctant to switch to something that no longer requires that knowledge. It's like driving an automatic car when you've already learnt to drive manual. I see this pattern everywhere and people are definitely reluctant to give up their hard-earned v4 knowledge.

Remembering IP addresses... How quaint!


Sounds like me. My concern: if one just forgets everything, how does one know whether their router, firewall, etc. are too permissive? Security is still my responsibility.

And one still needs to pay attention to IPv4, so what is the benefit? A simultaneous half-vigilant, half-careless stance is not workable.


What do you mean by "give your devices proper names"?

Just plain old hostnames really.

Hostnames are either in a static hosts file, which you need to distribute to your machines somehow (probably using older names or raw addresses, which you do not know, because you need the names in the first place), or in DNS, and for most people DNS is under the ISP's control.

Even if you have your own DNS server out there somewhere, you still need to allow a bit of DNS hijacking from your ISP in order to receive that verification SMS and enter the code into the ISP's log-in page.

DNS is a great thing, but just too much of a pain to configure.


You forgot 127.0.0.0/8 for loopback, 100.64.0.0/10 for CG-NAT, and 203.0.113.0/24 and 0.0.0.0/8

Why do you need to remember that when you can look it up?

The important part is knowing that there are special networks.


> IPv6 has SO many special networks. Network. Public. Multicast. Link local.

IPv4 has those exact same ones: link-local (169.254/16), multicast (224/4), public, private (RFC 1918).

* https://en.wikipedia.org/wiki/Reserved_IP_addresses

IPv6 is (IMHO) simpler: global unicast is 2000::/3, and everything else is either link-local (fe80::/10), multicast (ff00::/8), or ULA (fc00::/7). So an address either starts with a "2" or an "f".
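If you'd rather not memorize any of these prefixes, Python's stdlib `ipaddress` module already knows most of the special ranges on both sides; a quick sketch:

    import ipaddress

    # The stdlib already classifies most of the special ranges mentioned here,
    # for both IPv4 and IPv6.
    for addr in ["192.168.1.10", "169.254.7.7", "224.0.0.1",
                 "fe80::1", "ff02::1", "fd00::1", "2001:4860:4860::8888"]:
        ip = ipaddress.ip_address(addr)
        kind = ("link-local" if ip.is_link_local
                else "multicast" if ip.is_multicast
                else "private/ULA" if ip.is_private
                else "global")
        print(addr, "->", kind)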


But not on the same computer, and the application does not have to figure out which one it has to use.

Yes on the same computer. Pretty much every multicast-capable host has a unicast address and has multicast groups that they join when they get an IP address. [0] Edge routers almost always have -at minimum- a global address and a "site-local" address. Any host that has multiple active interfaces can have multiple "categories" of addresses assigned to it.

You might also be unaware of the fact that network interfaces can usually be assigned multiple IPv4 addresses, just like they can be assigned multiple IPv6 addresses.

> ...the application does not have to figure out which one it has to use.

You might be surprised to learn that that's the job of the routing table on the system. Applications can influence the choices made by the system by binding to a specific source address, but the default behavior used by nearly everything is to let the system handle all that for you.

[0] You appear to be unaware that multicast addresses aren't assigned to a host. I suspect you're unaware that IPv6 removed the special-case "broadcast" address. It's now treated as what it actually is; the "all hosts" multicast address.
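To make the routing-table point above concrete, here's a small stdlib sketch that asks the kernel which source address it would pick for a given destination (the destination addresses are just well-known public resolvers used as examples; a UDP connect() transmits nothing):

    import socket

    # Ask the kernel which source address its routing table would pick for a
    # given destination. connect() on a UDP socket sends no packets; it may
    # raise OSError if there is no route (e.g. no IPv6 connectivity).
    def source_address_for(dest: str) -> str:
        fam = socket.AF_INET6 if ":" in dest else socket.AF_INET
        with socket.socket(fam, socket.SOCK_DGRAM) as s:
            s.connect((dest, 53))
            return s.getsockname()[0]

    print(source_address_for("2001:4860:4860::8888"))  # your outgoing IPv6 address
    print(source_address_for("8.8.8.8"))                # your outgoing IPv4 address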


It was a mix of Walmart, except with higher quality brands and without groceries, and Kohl's, with some Lowe's and Best Buy mixed in.

One of the big companies making billions on Python software should step up and fund the infrastructure needed to enable PyPI package search via the CLI, like you could with `pip search` in the past.

Serious question: how important is `pip search` to your workflows? I don’t think I ever used it, back when PyPI still had an XMLRPC search endpoint.

(I think the biggest blocker on CLI search isn’t infrastructure, but that there’s no clear agreement on the value of CLI search without a clear scope of what that search would do. Just listing matches over the package names would be less useful than structured metadata search for example, but the latter makes a lot of assumptions about the availability of structured metadata!)


Not important at all now, given that it hasn't worked in a decade and I've filed it away as pointless to even consider for a workflow.

However, I get a lot of mileage out of package repository search with package managers like pacman, apt, brew, winget, chocolatey and npm.

> I think the biggest blocker on CLI search isn’t infrastructure

It's why it was shut down, the API was getting hammered and it cost too much to run at a reasonable speed and implement rate limiting or whatever.


> It's why it was shut down, the API was getting hammered and it cost too much to run at a reasonable speed and implement rate limiting or whatever.

Sort of: the original search API used a POST and was structured with XML-RPC. PyPI’s operators went to great efforts to scale it, but that wasn’t a great starting point. A search API designed around caching (like the one used on PyPI’s web UI) wouldn’t have those problems.


I upvoted you because I broadly agree with you, but search is never coming back in the API. They previously outlined the cost involved, and given how minimal the value it provides more broadly, there's no way it's coming back any time soon. It's basically an abuse vector because of the compute cost.

Funding could help, but it still requires PyPI/Warehouse to ship and operate a new public search interface that is safe at internet scale.

They operate a public package hosting interface; how is a search one any harder?

PyPI responses are cached at 99% or higher, with less infrastructure to run.

Search is an unbounded context and does not lend itself to caching very well, as every search can contain anything.


PyPI has fewer than one million projects. The searchable content for each package is what, 300 bytes? That's a 200 MB index. You don't even need fancy full-text search, you could literally split the query by word and do a grep over a text file. No need for Elasticsearch or anything fancy.

And anyway, hit rates are going to be pretty good. You're not taking arbitrary queries, the domain is pretty narrow. Half the queries are going to be for requests, pytorch, numpy, httpx, and the other usual suspects.
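A rough sketch of that grep-over-a-flat-file idea (the `pypi-index.tsv` file is hypothetical: one tab-separated name and summary per line):

    # Naive "split the query and grep" search over a flat, hypothetical index
    # file: one "name<TAB>summary" entry per line.
    def search(query, index_path="pypi-index.tsv"):
        terms = query.lower().split()
        with open(index_path, encoding="utf-8") as f:
            for line in f:
                name, _, summary = line.rstrip("\n").partition("\t")
                haystack = (name + " " + summary).lower()
                if all(t in haystack for t in terms):
                    yield name, summary

    for name, summary in search("async http client"):
        print(name, "-", summary)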


I wonder how a PyPI search index could be statically served and locally evaluated on `pip search`?

PyPI servers would have to be constantly rebuilding a central index and making it available for download. Seems inefficient.

Debian is somehow able to manage it for apt.

1. Debian is local-first via a client-side cache.

2. apt repositories are cryptographically signed, centrally controlled, and legally accountable.

3. apt search is understood to be approximate, distro-scoped, and slow-moving. Results change slowly and rarely break scripts. PyPI search rankings change frequently by necessity.

4. Turning PyPI search into an apt-like experience would require distributing a signed, periodically refreshed global metadata corpus to every client. At PyPI's scale, that is nontrivial in bandwidth, storage, and governance terms (a rough sketch of that local-first model follows the list).

5. apt search works because the repository is curated, finite, and opinionated
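As a sketch of what that apt-like, local-first flow could look like with what exists today: PyPI's /simple/ index has a JSON form (PEP 691) that lists every project name, which a client could cache and search offline. The cache filename below is made up, and this covers names only, not full metadata:

    import json, pathlib, urllib.request

    CACHE = pathlib.Path("pypi-projects.json")  # made-up local cache location

    def refresh_cache():
        # PEP 691: the /simple/ index can be served as JSON
        req = urllib.request.Request(
            "https://pypi.org/simple/",
            headers={"Accept": "application/vnd.pypi.simple.v1+json"},
        )
        with urllib.request.urlopen(req) as resp:
            CACHE.write_bytes(resp.read())

    def search(term):
        projects = json.loads(CACHE.read_text())["projects"]
        return [p["name"] for p in projects if term.lower() in p["name"].lower()]

    if not CACHE.exists():
        refresh_cache()  # the full name list is tens of megabytes, not gigabytes
    print(search("requests")[:10])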


Isn't this an incrementally updatable tree that could be managed as a Merkle tree? Git-like, essentially?

The install side is basically Merkle-friendly (immutable artifacts, append-only metadata, hashes, mirrors). Search isn’t. Search results are derived, subjective, and frequently rewritten (ranking tweaks, spam/malware takedowns, popularity signals). That’s more like constantly rebasing than appending commits.

You can Merklize “what files exist”; you can’t realistically Merklize “what should rank for this query today” without freezing semantics and turning CLI search into a hard API contract.
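To make the "what files exist" half concrete, here's a toy Merkle-root computation over a file list; any change to any leaf changes the root, which is what makes incremental, verifiable syncing workable:

    import hashlib

    # Toy Merkle root over a list of artifact names/contents: hash the leaves,
    # then hash adjacent pairs upward until one root remains.
    def merkle_root(leaves):
        level = [hashlib.sha256(x).digest() for x in sorted(leaves)]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the odd leaf out
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    print(merkle_root([b"pkg-1.0.tar.gz", b"pkg-1.0-py3-none-any.whl"]).hex())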


Are you saying PyPI search is spammed? o-O

Yes, it was subject to abuse, so they had to shut down the XML-RPC API.

That depends on how it can be downloaded incrementally.

The searchable context for a distribution on PyPI is unbounded in the general case, assuming the goal is to allow search over READMEs, distribution metadata, etc.

(Which isn’t to say I disagree with you about scale not being the main issue, just to offer some nuance. Another piece of nuance is the fact that distributions are the source of metadata but users think in terms of projects/releases.)


> assuming the goal is to allow search over READMEs, distribution metadata, etc.

Why would you build a dedicated tool for this instead of just using a search engine? If I'm looking for a specific keyword in some project's very long README I'm searching kagi, not npm.

I'd expect that the most you should be indexing is the data in the project metadata (setup.py). That could be unbounded but I can't think of a compelling reason not to truncate it beyond a reasonable length.


You would definitely use a search engine. I was just responding to a specific design constraint.

(Note PyPI can’t index metadata from a `setup.py` however, since that would involve running arbitrary code. PyPI needs to be given structured metadata, and not all distributions provide that.)


>The searchable context for a distribution on PyPI is unbounded in the general case, assuming the goal is to allow search over READMEs, distribution metadata, etc.

Even including those, it's what? Sub-20-30GB.


How does the big white search box at https://pypi.org/ work? Why couldn’t the same technology be used to power the CLI? If there’s an issue with abuse, I don’t think many people would mind rate limiting or mandatory authentication before search can be used.

The PyPI website search is implemented using a real search backend (historically Elasticsearch/OpenSearch-style infrastructure) layered behind application logic on the Python Package Index. Queries are tokenized, ranked, filtered, logged, and throttled. That works fine for humans interacting through a browser.

The moment you expose that same service to a ubiquitous CLI like pip, the workload changes qualitatively.

PyPI has the /simple endpoint that the CDN can handle.

It’s PyPI philosophy that search happens on the website and pip has aligned to that. Pip doesn’t want to make a web scraper understandably so the function of searching remains disabled


PyPI has a search interface on their public website, though?

If you really need it, they publish a dump regularly and you can query that.

For simple use cases, you have the web search, and you can curl it.


They probably don't need it. You can start a crowdfunding campaign if you do.
