I agree. We've been assured by these skeptics that models are stochastic parrots, that progress in developing them was stalling, and that skill parity with senior developers was impossible - all while we listened to a kind of self-indulgent relish about the eventual catastrophes awaiting companies that adopt them. Perhaps these skeptics will eventually turn out to be right; who knows. But what we're seeing at this stage is just the opposite: significant progress in model development last year, usage patterns being explored by almost every development team without widespread calamity, and the first well-functioning automated workflows for replacing entire teams appearing. At this point, I'd bet on the skeptics being the camp eventually forced to make the hard adjustments.
Pray tell, how has the world benefited from a flood of all these superhuman developers? Where is the groundbreaking software that is making our lives better?
It depends totally on the opening. You can be out of book and database far quicker than that for offbeat stuff, or in book far longer for popular openings.
Another distinction needs to be made between positions seen and positions played. Almost every viable position will have been seen in preparation well beyond 10 moves. But seeing them on the board is rarer.
Why lie about this when the first paragraph is explicit about its source?
> As many as 30,000 people could have been killed in the streets of Iran on Jan. 8 and 9 alone, two senior officials of the country’s Ministry of Health told TIME
It's pretty common practice if naming them will get the people who shared the info in trouble. Depends on whether you think Time is a trustworthy source I guess
They also might answer these things quite casually. The response may not come from people who know; it's probably been delegated to someone quite junior.
I once saw a survey question (on the outlook for the economy and exchange rates) get bounced to someone junior, who then looked to an external source which based its answer partly on the previous edition of the same survey.
Saying you are laying people off because of AI makes the company look like it's innovating and embracing new technology. Saying you are laying people off because costs are up and earnings are struggling reflects badly on the company's image and the performance of its leadership. Everyone has incentives to lie.
If AI uptake were genuinely productive, we would be seeing faster GDP growth and an uptick in the job market, as companies looked for people to leverage AI into even greater productivity gains.
Companies tend to lie to their employees in much the same way as to the public. Very often most of the organization ends up believing the lies. The few people who don't tend to keep quiet or go elsewhere.
You might like the Digital ID scheme. It uses Zero Knowledge Proofs, so that one of your 'IDs' could be a simple 'is over 18' ZKP, without involving your name or any other detail. These proofs are not tracked by government and cannot be associated with your wider identity. This is one of the examples listed in the framework docs.
> "Unlike with a physical document, when using a digital identity, you can limit the amount of information you share to only what is necessary. For example, if you are asked to prove you are over 18, you could provide a simple yes or no response and avoid sharing any other personal details." (from https://www.gov.uk/guidance/digital-identity )
There's a huge amount of disinformation circulating about the digital ID scheme, and the government's messaging over it has been catastrophically clumsy. Which is a pity, because the system has clearly been designed with civil liberties in mind (ie defensively) and for citizens it's a serious improvement over the current system.
While great on paper, zero-knowledge-proof based systems unfortunately have a fatal flaw. Due to the fully anonymous nature of verification tokens, implementations must have safeguards in place to prevent users from intercepting them and passing them on to someone else; in practice, this will likely be accomplished by making both the authenticator and the target service mobile apps that rely on device integrity APIs. This would ultimately result in the same accessibility issues that currently plague the banking industry, where it is no longer possible to own a bank account in most countries without an unmodified, up-to-date phone and an Apple or Google account that did not get banned for redeeming a gift card.
Furthermore, if implementers are going to be required to verify users per-session rather than only once during signup, such a measure would end up killing desktop Linux (if not desktop PCs as a whole) by making it impossible for any non-locked-down platform to access the vast majority of the web.
I'm unsure how applicable these risks are here. The proofs appear to be bound to the app, which in turn is bound to the user's face/fingerprint (required to unlock it).
It's an app, and data is submitted with a tap to approve. The data is just attribute / proof pairs (eg nationality:British / true), and the bundles assembled from these pairs will differ between use cases. Nightclub proof of age would just need the 'over 18' proof, while opening a bank account would need a photo, name, address, date of birth, nationality etc. In other words, there isn't a single Digital ID. The 'ID' is just a container for a specific use. They can be reused, but they will often be single purpose or generated from the attributes saved in your wallet the moment a service requests your data. The best way to think of this is that it gives you a way to pass on your citizen data with authority, and without having to overshare.
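As a rough sketch of that bundle-per-use-case idea (the wallet contents, use-case names, and shapes here are my own guesses for illustration, not the actual wallet API):

```python
# The wallet holds attribute / proof pairs (booleans like over_18) alongside
# raw attributes, and assembles a different single-purpose bundle for each
# service's request.
WALLET = {
    "nationality:British": True,   # proof pair: attribute / true
    "over_18": True,               # proof pair derived from date of birth
    "name": "A. Example",          # raw attributes, shared only on request
    "address": "1 Example Street",
    "date_of_birth": "1990-05-01",
    "photo": "<jpeg bytes>",
}

# Each use case declares only the attributes it actually needs.
USE_CASES = {
    "nightclub_entry": ["over_18"],
    "bank_account": ["photo", "name", "address", "date_of_birth",
                     "nationality:British"],
}

def assemble_bundle(use_case: str) -> dict:
    """Build a single-purpose 'ID' containing only the requested attributes."""
    return {attr: WALLET[attr] for attr in USE_CASES[use_case]}

print(assemble_bundle("nightclub_entry"))  # {'over_18': True} - nothing else shared
print(assemble_bundle("bank_account"))     # the full onboarding bundle
```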
The major problem is that no one trusts the government not to abuse it and use it to track everything people do. Some proportion of people will trust the current government but be paranoid that a future government will abuse it, and another proportion won't trust even the current government not to abuse it.
You might be able to get more trust by the government assigning a third party to audit the systems to make sure they are working as advertised, and not being abused, but you would still get people being paranoid that either the third party could be corrupted to pretend that things are okay, or that a future government would just fire them and have the system changed to track everyone anyway.
No matter what you do, you will never convince a subset of people that a system that can potentially be used to track everyone won't be abused in that way. Unfortunately, those people are most likely correct. This is why we can't have nice things :(
For the record, I think it would be great to have a trusted government-issued digital ID for some purposes. I especially think it would be great to have an officially issued digital ID that could be used to sign electronic documents. My partner and I moved home recently, and it was not easy signing and exchanging legal documents electronically.
> You might be able to get more trust by the government assigning a third party to audit the systems to make sure they are working as advertised, and not being abused, but you would still get people being paranoid that either the third party could be corrupted to pretend that things are okay, or that a future government would just fire them and have the system changed to track everyone anyway.
The scheme is one step ahead of you: auditors are required [1]. The government's role in the scheme is limited to operating the read-only, scattered APIs in front of its departments (eg no central database), funding the auditors and the trust registry (a public key store for Digital Verification Services), and legislating. The verification work will all be done by private-sector digital verification services - whichever is associated with the wallet app you've chosen. There were already 227 of them working for various services last year - we all benefit from the sector being brought under a formal regulatory framework.
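Assuming the trust registry works roughly as described - a public key store for accredited DVSes - a relying party's check might look something like this sketch (registry contents and signature scheme invented for illustration, using the `cryptography` package):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An accredited DVS publishes its public key into the government-run registry.
dvs_key = Ed25519PrivateKey.generate()
TRUST_REGISTRY = {"acme-verify-ltd": dvs_key.public_key()}

presentation = b'{"attribute": "over_18", "value": true}'
signature = dvs_key.sign(presentation)

def is_trusted(dvs_id: str, presentation: bytes, signature: bytes) -> bool:
    """Accept a presentation only if it was signed by a DVS in the registry."""
    key = TRUST_REGISTRY.get(dvs_id)
    if key is None:
        return False  # unaccredited verifier: reject outright
    try:
        key.verify(signature, presentation)
        return True
    except InvalidSignature:
        return False

print(is_trusted("acme-verify-ltd", presentation, signature))  # True
print(is_trusted("unknown-dvs", presentation, signature))      # False
```

Note the government never sees the presentation in this flow; it only maintains the list of keys.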
The tracking you fear doesn't seem to be possible beyond what is already tracked when you open a bank account etc., but that is entirely outside the scope of the wallet's operation. It's been designed specifically to make the kind of abuse you fear impossible, at least in its current form, where government is out of the loop except as a passive reference and the DV services are legally prevented from retaining any data without your consent. Of course that could change in future, but as it stands the framework doesn't allow for what everyone fears it does.