Hacker News | romanichm12's comments

What Are We Becoming, and What Price Are We Paying?


It's fascinating how VPN services have successfully marketed themselves as the ultimate privacy solution, when in reality, they are often just a middleman with full visibility into your internet traffic. How can we, as users, ever truly verify a VPN provider's claims of "no logging" or "complete privacy"? It's a promise based on trust, but why should we trust a company whose business model revolves around our data?


It's especially fascinating how such a highly technical, rather nerd-ish concept is being marketed so widely. They did a great job there.


It has lots of practical uses such as evading Netflix geo-restrictions or getting around local network restrictions, even without considering privacy.


> Because a VPN in this sense is just a glorified proxy.


What else would a proper VPN be besides a particular way to have an internet-wide proxy (as opposed to e.g. just a web proxy)?


Traditionally it’s a mechanism to connect to a private network via a public one, for example to access a corporate network from home.

But a proxy doesn’t have to be a web proxy. SOCKS proxies were pretty common at one time.
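The "glorified proxy" point can be sketched in code: at the transport level, a proxy just relays bytes between two sockets. Below is a minimal single-connection relay in Python, purely illustrative — a real SOCKS5 server additionally negotiates authentication and the destination address per connection (RFC 1928); this sketch only shows the byte-shuffling core, with a fixed upstream.

```python
# Minimal TCP relay: accepts one client and shuttles bytes between it
# and a fixed upstream address. Illustrative only -- a real SOCKS5
# server negotiates the destination per connection (RFC 1928).
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def relay(srv: socket.socket, upstream: tuple[str, int]) -> None:
    """Accept one client on the listening socket srv and relay
    traffic in both directions to/from the upstream address."""
    client, _ = srv.accept()
    remote = socket.create_connection(upstream)
    # One direction runs in a background thread, the other inline.
    threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
    pipe(remote, client)
```

A VPN in the "glorified proxy" sense does the same relaying, but for all traffic at the IP layer rather than per TCP connection — which is exactly why the provider sees everything.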


> How can we, as users, ever truly verify a VPN provider's claims of "no logging" or "complete privacy"?

Court orders. They might be lying to customers, but they're unlikely to lie to a court. So if a court approaches them and they respond with "we have no data", they have no data.

Okay, it's never gonna be literally no data — they'll still supply an email address, payment method, registration date, and similar things — but that's not my concern.


In addition, many VPN services have been tested in court like this and found to keep no logs, which suggests they can be trusted.


We place faith in algorithms and data, much like faith in a higher power, but rarely stop to consider who writes these digital 'scriptures' and what their intentions might be. Are we blindly trusting new 'gods' crafted in server rooms, not realizing that they might be as fallible—or as manipulative—as the human hands that created them?


I have to politely contest the conflation of faith and trust. While I understand that they're used synonymously in a colloquial context, you don't place faith in algorithms, you place trust in them. You KNOW that the algorithm exists, you just don't necessarily know what decisions it'll make or how. Whereas, when you place faith in something, you aren't certain that it even exists or that it'll do anything. And so it makes sense to me that someone who's able to make that leap of faith, a rather apt turn of phrase in this context, is also more able to place trust in something.


> when you place faith in something, you aren't certain that it even exists or that it'll do anything.

I have to politely disagree with this statement. To place faith in something means that you believe it to be true regardless of evidence. If you aren't certain, it is not faith but guesswork.


> If you aren't certain, it is not faith but guesswork.

Well, I've had theist friends who'd disagree with that, who've said, paraphrased, that "if your faith has no doubt, then it's not faith, it's a dogma." I can't speak to this myself, but I do find their choice to believe despite their doubt more honourable than someone's blind certainty.


The linked paper is a large-scale study comparing human-written and AI-generated argumentative student essays. The researchers used a large corpus of essays and had them rated by human experts (teachers) using standard criteria. They also analyzed the linguistic characteristics of the generated essays (https://www.linkedin.com/pulse/top-5-best-ai-essay-generator....).

Here are the main findings, explained simply:

1) The AI they tested, called ChatGPT, can write essays that are rated higher for quality than human-written essays. This means that the AI was able to write essays that the teachers thought were better than the ones written by people.

2) The AI's writing style is different from humans. For example, it uses fewer words that show discussion or certainty, but it uses more words that turn verbs into nouns and has a greater variety of words.

3) The researchers concluded that AI models like ChatGPT and Textero.ai are better than humans at writing argumentative essays. They suggest that teachers should start thinking about how to use these AI tools in education, just like how calculators are used in math. They believe that AI can help free up time for other learning objectives.

So, to answer your question, according to this study, AI can write good essays, even better than humans in some cases. But remember, it's not just about who writes better essays. It's also about learning and understanding, and that's something humans are still very good at!
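The "greater variety of words" in point 2 refers to lexical diversity. One common (if crude) way to measure it is the type-token ratio — distinct words divided by total words. This is an illustrative sketch only; the paper may well use a different or length-normalized variant:

```python
# Type-token ratio (TTR): number of distinct words divided by total
# word count. A rough lexical-diversity measure; longer texts tend to
# score lower, so serious studies often use normalized variants.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# "the" repeats, so 5 distinct words out of 6 total:
type_token_ratio("the cat sat on the mat")  # 5/6 ~= 0.833
```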


I see that there's some skepticism about running WebAssembly in containers and how it constitutes a next-gen serverless solution. It's important to note that the use of WebAssembly here is not just about the runtime environment but also about the features it brings. WebAssembly binaries can start up significantly faster than traditional VMs or containers. They also have a strong isolation model and security sandbox that allows running multiple tenants in the same supervisor, which can lead to reduced costs and better utilization of resources.


Java servlets in application servers, IIS CLR handlers... the WebAssembly sales pitch is so 2001.


Except you run a separate Tomcat instance per app — or better yet, embedded Tomcat — anyway.

Source: someone who runs two dozen Tomcat containers.


Doesn't change the fact that the WebAssembly marketing is "clever".


Well, I'm not sure about the EU AI Act, but I'm pretty sure they're still struggling with the 'Don't Turn Skynet On' Act.

