
I don't get what this means.

- There is an option to compress images before uploading (which is probably on by default). Not unreasonable.

- Content is deleted if you do not access Google Photos in any way for two years AND do not have a paid storage plan. Not unreasonable.

- If you use more storage than you pay for over a two-year period, content will be deleted. Again, not unreasonable.

I really do not get the complaint here.


I upload my pictures with quality q=X.

The next day I download a picture. Quality is q=Y, where Y < X.

One year after, I download the same picture. Quality is q=Z, where Z < Y < X.

So my pictures uploaded to Google Photos suffer from decay whenever Google chooses to degrade quality.

None of your "not unreasonable" scenarios are involved.


As everyone is posting, a similar one exists for Belgium: https://trainmap.belgiantrain.be/

(It is not 100% live, "The train positions are calculated based on timetables, real time info and prognoses.")

Also fun are international trains, which are shown, but there is no map outside of Belgium.


Off the top of my head, there are also:

- ISO 8601: dates and times

- ISO/IEC 9899: the C standard

- ISO/IEC 14882: the C++ standard

> This isn't how serious comp-sci standards work (the TCP people just throw out RFCs).

This isn't how most _web_ standards work.


True, although for both C and C++, standards drafts are prepared on GitHub. At least for C (but I presume also for C++) you can check out a revision which is functionally equivalent to the ISO standard (but minus the branding).


Am I missing something here? I understand the "The IAB loves tracking users." part of the title, but where is the part about "But it hates users tracking them."? Based on the title, I would have expected an exception in their standard for themselves, or a story about how their own user tracking technology was used against them.

However, there is none of that. Just that the IAB does not want to make it easier for users to escape their tracking (which, given their purpose, is unfortunately entirely expected). What justifies "But it hates users tracking them."?


The latter part refers to plus addressing, like "han.solo+github@gmail.com", which people use so that "if they later start receiving spam to that address, they know the service has leaked or sold their info". Now the IAB requests that advertisers normalise such addresses by dropping the part after the plus sign, effectively stopping users from "tracking" the advertisers.
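A minimal sketch of the kind of normalisation being requested (the function name and the Gmail-specific dot rule are assumptions for illustration; real advertiser pipelines may differ):

    // Sketch of plus-address normalisation as described above. The
    // exact rules advertisers apply are assumptions for illustration.
    function normalizeEmail(address: string): string {
      const [localPart, domain] = address.toLowerCase().split("@");
      // Drop everything after the first "+" in the local part.
      let local = localPart.split("+")[0];
      // Gmail also ignores dots in the local part.
      if (domain === "gmail.com") {
        local = local.replaceAll(".", "");
      }
      return `${local}@${domain}`;
    }

    // "han.solo+github@gmail.com" and "hansolo+amazon@gmail.com" both
    // normalise to "hansolo@gmail.com", so the per-service tag is lost.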


This is dangerous, as some mail servers could treat plus addresses as entirely distinct mailboxes.


(Web) developers have gotten lazier and simply don't care anymore. The fact of the matter is that if you don't host your email with one of the big three, some services probably won't work anyway. I'd also like websites to show me things like news and recipes without having them run javascript, but apparently this means I deserve white pages because writing HTML is too much of a hassle for the modern web developer.

It's quite sad to see. It's also the reason I'm using somethingunique@domain.tld; if cyberstalkers start normalising to a domain, they'll only hurt their own business.


> I'd also like websites to show me things like news and recipes without having them run javascript, but apparently this means I deserve white pages because writing HTML is too much of a hassle for the modern web developer.

Often enough the content is there but just hidden until the JS loads for ... reasons, idk.


Nobody cares; email, along with the majority of the internet as "computers talking protocols", is dead. The absolute majority of email is handled by Gmail or Microsoft, accounting for >80% of MX servers in the wild by the data I had five years back. I'd imagine the share of the duopoly is even larger today, considering how difficult it is to get into the inbox these days.


Email is more alive than any other federated communication protocol on the internet. It is only rivaled by phones and physical mail.

Defeatism here helps no one.


In the linked document they say to only do it with @gmail.com addresses.


>and therefore effectively stopping users from "tracking" the advertisers.

I guess that's the nefarious explanation, but there's a more benign one: if you want to correlate user behavior, you need some sort of normalization, otherwise john.doe+apple@example.com and john.doe+amazon@example.com would show up as different "people" and cause match rates to suffer. Sure, getting tracked isn't great, but it's not exactly the hypocrisy rage-bait that the OP is implying.


Why would a person want advertisers to correlate their behavior? What’s in it for them?


In a logical world it would mean that you see fewer ads. If your matches increase, your price per ad goes up so the service needs to show you fewer ads to hit their ad revenue target for you.

But if you think that will happen I have an East River transportation startup in New York that is seeking an angel investor.


Revenue targets are certainly dynamic but avoiding user churn is important too.

So you want to keep cost per impression up. You would not want to saturate and devalue.

Better to play 10 ads at 10c each vs 20 ads at 4 or 5c, as a high ad load impacts users' propensity to return to the service.


> the service needs to show you fewer ads to hit their ad revenue target for you.

In a logical world, yes.

In a capitalist world, that revenue target goes up every year. Apple became the richest company on earth selling hardware, yet here they are now drowning their software with ads.


We really should just cut out the middle man here and ban ads entirely. If an ad broker wants to pay me to watch an ad, pay me directly.


I've come to the conclusion that we should just ban all advertising, completely. Any possible positives to allowing advertising are entirely dwarfed by the negatives.


I would love it if there were no ads. Seems like a dream though. Has any government tried it? If so, what kind of sneaky ads posing as content emerged? "Native ads" posing as content already exist. At least it's easy to tell the difference when the ad is out in the open and marked clearly.


> I would love it if there were no ads.

So would I; they're disfiguringly ugly.

So would you ban shop signs? What about a shop-sign that simply said "Cafe"? Or "Meals"? That would be the end of chain stores (which I would not regret).

I don't mind shop signs; I do mind posters all over the street-scene.

The Post Office delivers about 4X as much unaddressed junk advertising pizzas and estate agents as real mail, and I object to that. In this country (UK), anyone can stuff whatever junk they like in your mailbox; in the US, I believe only USPS can put anything in your mailbox. Are USPS allowed to deliver unaddressed pizza fliers?

The best argument in favour of advertising is that it makes it possible for a new entrant to a market to make an impression; without it, markets would always be dominated by incumbents, give or take the occasional surprise. I don't know how to capture that benefit, without ending up with the whole world covered in billboards.


We already regulate shop signs - compare e.g. streets in Asia vs. Europe; what is tolerated is completely different. Tolerating informative signage (e.g. non-flashy signs telling you what shop you are looking at) is not incompatible with banning advertisements.

> The best argument in favour of advertising is that it makes it possible for a new entrant to a market to make an impression; without it, markets would always be dominated by incumbents, give or take the occasional surprise. I don't know how to capture that benefit, without ending up with the whole world covered in billboards.

I don't think that argument holds much water, as ads require a big capital investment. The main reason new entrants need to advertise is that the incumbents are already advertising, so you need to compete there just to get back to the base level of engagement.


I think this new sort of AI-powered language model assistant search will be interesting once it trickles into end user control. Injecting ads requires the model output to be under central control. But when we get to the point that we can just automate our browsers to fetch 1000 pages and generate summaries locally, ads will be toast. There is a massive battle for control over generalized computing brewing, because the ad networks need to force us not to build.


> drowning their software with ads

Where? I use Apple hardware basically exclusively. Are they that good in hiding the ads, or are you exaggerating a bit?


GP is exaggerating, but they definitely do seem to be increasing the number of ads in apps. Apple News (even the paid News+) has tons of ads, often after a single sentence in an article. In AppleTV+ (a service I love!) they’ve removed the image cards of shows in your up next queue on the “What to Watch” page and replaced them with a big auto-playing audio and video preview of one of their shows that’s not in your list. There’s no way to get the old functionality back. I’ve stopped using the app other than from the Home Screen where I have it set to show my “Up Next” queue. It’s really disgusting that they’re making these changes and it’s really turning me off of their services which I loved until recently.


off the top of my head:

- App Store (biggest offender)

- Apple News

- Stocks

This year they'll be rolling them out into Apple Maps as well.


Open the App Store app and look at how many ads are stuffed in there.


While that's true, the question was specifically about the claim that they are "drowning their software with ads". Providing a single example doesn't really support that claim.


You can get ads for their services (mostly Music and Arcade) in iOS Settings (and IIRC System Settings on macOS), unsolicited push notifications from the App Store, etc.


Ostensibly it's so that the user can get more relevant ads.

In practice, it's not of course, but that's the answer you'd get if you ask them.


Of course that's the goal. The IAB isn't an NSA front. The problem with advertising is not the primary goal, but all the secondary things that can happen.


lokedhs said "ostensibly" because the internet advertising industry has long maintained the pretence that tracking users and showing them relevant ads is helpful to the user, when the truth is the advertising industry cares about the opinion of users like the thanksgiving industry cares about the opinion of turkeys - which is to say, not at all.

There's a reason these things are opt-out rather than opt-in.

The answer to marcus0x62's question - why would a person want advertisers to correlate their behavior - is that they wouldn't, and if they want to advocate for their own self-interest they should install an ad blocker.


> the truth is the advertising industry cares about the opinion of users like the thanksgiving industry cares about the opinion of turkeys

Nobody asks for the steak’s opinion when planning a BBQ.


If that were the goal, you'd expect ads to actually be more relevant and useful to users.

They aren't, which has been shown in studies. What has been shown is that showing people the same things they already bought gives them regret, which increases the total amount of purchases.

In other words, the goal is not to give users a good experience watching ads. It is to make them buy more, which is an orthogonal goal.


Oh, I see what you mean. I guess it depends on whose definition of "relevant" you follow.


Let's say you're DHS. You contact some person and have a conversation like this:

Govt: “I need IP addresses, ideally cellular and known public wifi, of a person using this email address”.

Data broker: “Here’s the list including the most recent cellular IP address associated with that person at this timestamp and their most used public wifi locations.”

Govt: “Hey, cellular provider, where is this subscriber right now?”

Provider: “Here’s the lat/long, last seen 1 second ago. Happy hunting!”


How does that contradict what the parent poster is saying? Even though ad-tech companies make tracking individuals easier, it doesn't change the fact that it's still largely funded by advertising itself, not through some shady government shell company.


I disagree: the main problem with advertising is its primary goal, which is manipulating people into excess consumption. There are of course other secondary issues as well, but even without them ads are already a net negative for society.


Yeah; that's not benign. I don't want my behavior being "correlated" by a shady group of companies whose sole purpose is learning how to better manipulate me for their own profit.


I never claimed as such. From the comment you replied to:

>Sure, getting tracked isn't great, but it's not exactly the hypocrisy rage-bait that the OP is implying.


> if you want to correlate user behavior

That in and of itself is nefarious.


I suppose we should start using aliases then instead.

Ultimately they cannot win this fight.


They can’t win this fight against people like us, but for the rest of people it’s a mess.


>> What justifies "But it hates users tracking them."?

Users can tell which site gave away their email address by using the variants discussed. It's not tracking in the same sense, but it does allow tracking who respects privacy. It also allows throwing away junk mail where someone required an email address just to (for example) make a sale or use their wifi.


It was a quicker way of me writing "but hates users being able to track what their members are doing with user data".

The reason my email address for this site is `+ycombinator@...` is that I can track when dang decides to go rogue and sell my email to a Nigerian Prince.

Think of it like sousveillance.


Have you ever received any spam addressed to that account? I've signed up for countless services over the past decade; the only ones to ever spam me were university consulting groups back from the time I was applying to grad school.


Yes. I frequently find services which - either maliciously or through incompetence - allow my email to be used for 3rd party spam.

Even some big companies aren't immune to a dodgy contractor walking off with a contact list.


Let's say you use a "plus" formatted email, i.e. foo+github@gmail.com.

Then when you receive spam/unsolicited marketing emails, you can see to which email the spam was sent, and therefore which company sold your data.

This suggests the only way to keep this behaviour is to have your own email hosted and use a truly different email per service.


They don't want us to be able to see which websites are creeps and collaborators building secret dossiers on us.


> What justifies "But it hates users tracking them."?

Nothing, I think, since there’s no indication the normalized email will be used to send email.

The normalization is for connecting together identities against the wish of the user, which is a different issue.


I suspect that part of the title is referring to the section that discusses using different email addresses for different services, so you can tell who sold your email address.


> I recently discovered a big cache - in terms of file number, 6000 files - of basically every edit I did in the last 6 months in VS Code - called Local History.

Isn't this a feature? I wouldn't want my editor to remove my local history without me knowing. I frequently use this local history (in Intellij), for whatever reason (it's easier than git, the project doesn't have version control, I haven't committed yet, ...)


6 months is a very long time for this though. I imagine that I would almost never want local history past a day or two. Beyond that I would look to my version control system.

Of course if the data is small why not keep it.


I do think there should be a default expiry, but I've absolutely gone back many, many months and found old versions of files in cases where I didn't have proper commits.


> These apps [single page web applications] also tend to feel snappier because page loads are not required for every request.

This isn't my experience at all, especially when network conditions are not great. The browser has error handling and a progress bar. Single page web applications often have bad or no error handling, and you have no idea if the request failed, the code errored, or something else went wrong. You need to refresh the whole page if something goes wrong, which results in having to load a ton of JS all over again, negating all possible savings in time and data usage.


One of my real pet hates is software developers who assume everyone's running their software on an excellent internet connection. Badly written SPAs are the worst offenders but I also pretty much gave up on any sort of regular gaming because pushing these enormous 10-20 GB updates over a sluggish connection just became insufferable. Add that to the constant and shameless fleecing of customers that's apparently the norm now and the enjoyment to effort ratio is just too low to bother with.


> One of my real pet hates is software developers who assume everyone's running their software on an excellent internet connection.

That could be said for a lot of assumptions developers make. Everyone has 32GB of ram, everyone has an SSD, everyone has an i7...

It is an old problem, but like almost everything else in computing for some reason it seems to have become much worse since about 2010.


The spread of capabilities is bigger now.

In 2000 a developer might have been developing on a Pentium 3 with 128 MB of RAM, but they could reasonably expect their audience to be using at least a 486 with 16 MB of RAM because that was the minimum spec for IE4.

Now you're stuck with trying to impress people with a Ryzen Threadripper and 64GB of DDR5, but your webapp still has to support everyone's iPhone 7 (with 2GB to share with iOS and everything else they have running) for as long as Apple does.


What I think is even more different is that someone with the 486 in 2000 was used to the idea that they wouldn't be able to run some software, but unable to run a website? Unheard of, it's just broken.


Apple still supports the older iPhones. For example, the iPhone 6 that doesn't get the latest iOS version anymore (stuck with iOS 12) is still supported by regular security updates (the last iOS 12 update was 54 days ago).

Apple supports (at least security wise) probably more devices than you think.


I was working on a proposal for a client who wanted to build a marketplace, but most of the vendors had low-end tech equipment, and he went with someone who had a lower quote. My biggest caution was that a limited number of developers actually understood what to do with slow internet connections and old tech. The marketplace failed... ignorance is sometimes the worst thing in people who believe that "it works on my machine" means it will work on everyone else's.


> One of my real pet hates is software developers who assume everyone's running their software on an excellent internet connection

Maybe it's a process problem, not a developer problem. Like management prioritizes a dozen analytics trackers and ad partners without any tooling for performance testing on a range of devices.


I guess my language was a bit imprecise, by "software developers" I meant "people involved in the software development process" rather than the specific role of the person writing the code. I guess I should have said "software companies" or "companies developing software". Managers definitely deserve their share of the blame for demanding user-hostile bloat too, likely more blame than the developers.

On the developer end (as a developer myself) we definitely deserve some of the blame for embracing things like Electron with such zeal in my opinion. I don't care how much memory a developer's workstation has, there's still a lot of hardware in use that can't take the bloat. I'm not saying everything has to be a native app written in vi against an original print of The C Programming Language as Brian Kernighan and Dennis Ritchie wrote it in 1978 or it's automatically shit, but something like React Native for the desktop would be far less horrible in terms of resource usage I reckon.


It could be that the customers most likely to pay are on more recent hardware. So why bother catering to everyone else if there is no return on investment.

Or for free software, chances are it is ad supported. So push as many ads and trackers as possible until the churn rate gets too high or competitors take market share.

Or more generally, maybe the design just follows the money.


Could also be the reverse: why pay for software that won't run on your computer anyway?


Absolutely, that's a calculated decision that companies make. It is fully expected that a certain market segment is not interested in being a customer at a given price point. Market segmentation is often done by OS version or device hardware profile.


An SPA requiring a backend connection is quite simply a distributed computer, and all the usual caveats about those, including network availability, apply.


Counterpoint: often I am faced with doing it right or doing it within budget for an activation that is designed to live for a few weeks only. I aim for 100% compatibility with different hardware, OSes, browsers, user experiences, client expectations and non-suicidal business practices.


> One of my real pet hates is software developers who assume everyone's running their software on an excellent internet connection.

It's not the developers' job to assume anything about users. It's the project managers'.


Infinite scroll pages are the worst with this...

You scroll and scroll, and scroll, and every time you reach some level "down", another section is loaded, then at one moment it stops. Something somewhere fails, no more new sections, no way to continue from that point, only a full refresh and a huge scroll down.


Which wouldn't be nearly as bad if they gave you an offset parameter, or updated the offset parameter in the URL, which they almost never do.
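A rough sketch of what that could look like, assuming a hypothetical "offset" query parameter:

    // Sketch: record how far the user has scrolled in the URL so a
    // reload can resume from the same place. The "offset" parameter
    // name is an assumption for illustration.
    function rememberOffset(itemsLoaded: number): void {
      const url = new URL(window.location.href);
      url.searchParams.set("offset", String(itemsLoaded));
      // replaceState avoids polluting the back-button history.
      history.replaceState(history.state, "", url.toString());
    }

    // On load, resume from the recorded offset instead of the top.
    const resumeAt = Number(
      new URL(window.location.href).searchParams.get("offset") ?? "0",
    );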


Hello Patreon.


> the code errored, or something else went wrong. You need to refresh the whole page if something goes wrong, which results in having to load a ton of JS all over again, negating all possible savings in time and data usage.

Not to mention that some SPAs don't maintain state, and all of a sudden one needs to jump through the whole process all over again. Personally, SPAs seem to try to reimplement a lot of browser features all over again in client-side JS.


I think it's just like anything that's gotten popular. SPAs are all over the place now, which means there are plenty of bad implementations along with the good ones. But generally you only notice the bad ones due to frustration. This isn't SPA-specific IMO. I've seen plenty of server rendered pages/sites that were also poorly done back when those were more the norm. Heck, I still help maintain a set of CGI scripts that are horrible and slow.


I agree and I hate two things about SPAs.

- Works terrible in bad networks.

- And the thing I hate the most is the shifting of images and links while the page is still loading. But this may be an implementation issue.


As is common when people rant about SPAs, your dislike (at least as written here) is not actually about SPAs but about other things.

> Works terrible in bad networks

Yes, software traditionally works shitty under bad network conditions unless the developer actively tests under bad network conditions or has previous experience of handling them. This is just as true for anything ever developed that touches a network.

> the shifting of images

This is simply developers failing to add width and height attributes to their <img/> elements. This has been happening since the dawn of the <img/> element and is unlikely to disappear. It also has nothing to do with SPAs; the same happens with server-rendered HTML.
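A minimal sketch of the fix (the path and dimensions are illustrative; in plain HTML it is just <img src="..." width="800" height="600">):

    // Sketch: declaring width/height lets the browser reserve the
    // image's space before the bytes arrive, so nothing shifts.
    const img = document.createElement("img");
    img.src = "/photos/cover.jpg"; // hypothetical path
    img.width = 800;  // intrinsic width: layout can be computed early
    img.height = 600; // intrinsic height: prevents the page jump
    document.body.appendChild(img);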


> unless the developer actively tests under bad network conditions

> This is simply developers failing

That's the whole thing. SPA = state. It requires a lot of dev time to properly handle everything. With stateless applications, you can simply refresh your browser.

The sluggishness is not only because of bad network conditions, but it's multiplied by the huge application that has to be sent over the network, application initialization, and the many subsequent network requests.


> The sluggishness is not only because of bad network conditions, but it's multiplied by the huge application that has to be sent over the network, application initialization, and the many subsequent network requests.

A "huge" application can be broken up with code splitting/dynamic imports. Initialisation can be seeded with serverside data or saved in browser storage between pages.

The only semi-unavoidable part is the "subsequent network requests", but even these can be sped up with caching, batching, etc.

> It requires a lot of dev time to properly handle everything.

But yeah, these things take effort
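For illustration, the dynamic-import flavour of that code splitting looks roughly like this (the module path is hypothetical):

    // Sketch of code splitting via a dynamic import: the heavy module
    // is fetched on first use instead of in the initial bundle.
    // "./heavy-editor" is a hypothetical module path.
    async function openEditor(): Promise<void> {
      const { Editor } = await import("./heavy-editor");
      new Editor().mount(document.body);
    }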


The network requests could be done with more intelligent APIs.

But if you take everything into account, you can also develop a really good native app.

This is not reality.


>Yes, software traditionally works shitty under bad network conditions

Not everything is equally affected by bad network conditions; SPAs generally are very badly affected by them. Indeed, what is a bad network condition for an SPA might be acceptable for a traditional static page.


SPAs can be built to work well offline. I've written them myself. There is nothing inherent in an SPA that makes it poor at this; quite the opposite. SPAs have excellent tooling for offline use.

A SPA's network dependency or robustness totally depends on the product's design. Some types of applications lend themselves well to offline first (anything where the user owns their content/data: todos, notes, documents, pictures, etc.). Others are much more dependent on fresh data, which pretty much means anything big enough that it's unreasonable to replicate to the device.

I'm a fan of "offline first" design and have been a proponent at various companies, to the point where I can build the feature in at very little additional cost if it is considered and decided in the design phase. Bolt-on patterns are messy.

However, the reality is that very few customers see this as a significant advantage. Which means that it doesn't really translate to market success. If budget is the #1 priority I can't in good conscience advocate for offline first unless it's going to offer a significant win for the company somewhere.


I’m asking this from a genuine place of curiosity, because last time I checked a few years ago the answer was “terrible”. Has the “save a file locally and load it again later” story for SPAs improved any?


There is no universal local file system API, so "terrible" might be a reasonable description of this.

There is a Chrome-only local file system API.

Generally SPAs are limited to browser provided storage like IndexedDB.
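As a sketch, that Chromium-only API (the File System Access API) is used roughly like this; real code should feature-detect first:

    // Sketch using the File System Access API, which ships only in
    // Chromium-based browsers.
    async function saveStateToDisk(state: unknown): Promise<void> {
      // Cast because the API is not in the standard TS DOM types.
      const handle = await (window as any).showSaveFilePicker({
        suggestedName: "state.json",
      });
      const writable = await handle.createWritable();
      await writable.write(JSON.stringify(state, null, 2));
      await writable.close();
    }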


I have implemented plenty of SPAs (for years) that use the pattern of A) allowing the user to download their current state as JSON/EDN and then B) allowing the user to upload a state from a JSON/EDN file and continue from where they last downloaded their state.

This has been easy to implement for as long as I can remember, so not sure why'd you say it's terrible. What stopped you previous times?
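A minimal sketch of that download/upload pattern (the names are illustrative):

    // Download: serialise the current state and hand it to the user
    // as a plain file; no filesystem API needed.
    function downloadState(state: unknown): void {
      const blob = new Blob([JSON.stringify(state)], {
        type: "application/json",
      });
      const link = document.createElement("a");
      link.href = URL.createObjectURL(blob);
      link.download = "state.json";
      link.click();
      URL.revokeObjectURL(link.href);
    }

    // Upload: read a user-selected file (e.g. from an
    // <input type="file">) back into application state.
    async function uploadState(file: File): Promise<unknown> {
      return JSON.parse(await file.text());
    }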


Ok, so that's kind of what I figured was the current state of things. Compared to being able to hit "Save" and have a local copy updated, that's a pretty subpar workflow. I get why it's like that (preventing a sandboxed website from being able to update files on the local filesystem) but...


Why do you write <img/> in an authoritative tone? It's not 2000 anymore, when we pretended XHTML or polyglot HTML was a thing. It's particularly odd to see that old cargo-cult idiom (or, worse, with additional random spaces) used in a post lecturing users about HTML5-era SPA supremacy.


I don't know what you mean, self-closing tags are part of the HTML (5) standard: https://html.spec.whatwg.org/#self-closing-flag


? What's wrong with XHTML? <img /> is clearer than <img> for anyone familiar with XML, and XHTML documents are easier to parse (e.g. can be processed with XSLT stylesheets).


Nothing wrong with XHTML per se (I did an internal site using XSLT in the early/mid 2000s), but XML/XHTML has been on the way out for the better part of this millennium, on the web at least. If you're developing web content and/or browser apps, you should know HTML IMO, and XML is the least of your concerns. Not looking forward to apps mixing XSLT and JavaScript ;)

Can't stand "<bla />" though, with that pointless/clueless space. The only places where I've encountered these are older JSP, FreeMarker, or Thymeleaf/Spring MVC apps (ugh).


JSX (React's default markup language) expects all tags to be closed, and self closing tags are valid. If you spend a significant time writing React apps (with JSX) then it becomes pretty second nature to write self closing tags. It's not exactly XHTML (I can't think of any other of XHTML's idioms it uses).


"- And the thing I hate the most is the shifting of images, links when the page is still loading. But this may be a implementation issue."

Thats just bad design.


So at least in this aspect, bad design seems to have gotten a lot easier.


I suspected that too, but I do not know enough about these reactive frameworks to be 100% sure. But as a user of such apps, it is absolutely frustrating.


To stop jumping you need to explicitly size yet-to-load sections. Without JS this means giving images explicit sizing; with JS, any section can be dynamically loaded and, therefore, jump. So good design takes dynamic loading into account and places size bounds accordingly.

Reactive libraries/frameworks don't explicitly make this worse or better, except that their presence implies a high chance of dynamic loading and, therefore, more opportunity for bad design. In addition, most component libraries fail to communicate /who/ needs to size a component and whether it is ever dynamic. It really doesn't help that most 'official' examples fail to resolve these issues.
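A minimal sketch of that bound-placing (the element id, endpoint, and 240px bound are assumptions for illustration):

    // Sketch: give the yet-to-load section an explicit minimum height
    // so the page doesn't jump when its content arrives.
    const section = document.getElementById("comments")!; // hypothetical id
    section.style.minHeight = "240px";
    fetch("/api/comments") // hypothetical endpoint
      .then((response) => response.text())
      .then((html) => {
        section.innerHTML = html; // fills the already-reserved space
      });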


Something endemic to SPAs.


A truly intolerable thing about SPAs and JavaScript is that regular HTTP caching of images and fonts had to be limited because JS APIs can be and are used for fingerprinting, driving the whole web thing ad absurdum.

Switching off JS/fingerprinting doesn't really help either, since it'll just disproportionately benefit Google's stronghold on web analytics even more.


Fingerprinting is not just JS. Fingerprinting is possible just using CSS rules alone.

The bigger issue, which seems to be what you are complaining about, is the gaping hole in privacy provided by 3rd-party storage/resources. That is not a problem particular to SPAs, and it can even be exploited without JavaScript.

I'm not sure where you're coming from regarding Google and web analytics. When 3rd-party storage is gone (or partitioned), it sounds like everyone would be on the same page in terms of what data they can collect.


"shifting of images"

So much truth. More than once I was trying to click on some image that had a link underneath, and it suddenly moved to some other place and I ended up clicking something different.


Exactly. The only time I felt the supposed "snappiness" is for documentation websites (so mainly text) that pre-fetch all links.
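A rough sketch of that pre-fetching (the hover trigger and link selector are assumptions for illustration):

    // Sketch: hint the browser to prefetch a same-origin page when
    // the user hovers its link, so navigation feels instant.
    document
      .querySelectorAll<HTMLAnchorElement>('a[href^="/"]')
      .forEach((anchor) => {
        anchor.addEventListener(
          "mouseenter",
          () => {
            const hint = document.createElement("link");
            hint.rel = "prefetch";
            hint.href = anchor.href;
            document.head.appendChild(hint);
          },
          { once: true },
        );
      });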


Yeah, especially when it's multiple sequential loads: instead of one large HTML blob, it takes forever. Usually with 270ms+ of latency per request if the server is in US East (from Australia).


Exactly.

A flawless SPA backed by a flawless API that produces responses in tens of milliseconds is superior to the old ways.

But a trash SPA backed by an API that produces responses eventually if ever and requires me to open the browser's developer tools to find out what happened? You can keep it. Old-school frameset sites are better than that.


The site mentions:

> Visual Mind is an AI engine specifically designed for understanding and scoring visual appearance of a website. Visual Mind has analyzed over a million websites to achieve an accuracy rate of over 97%.

How does accuracy work in a project like this, where the result is subjective?


They have a contract with MusicBrainz. They are listed on https://metabrainz.org/supporters/tiers/4.

> The Unicorn tier is for large companies or companies that would like to have a reciprocal relationship with our foundation. If you need special guarantees, indemnities or require us to sign your contract for a data license, please select this tier. If you have another creative idea you would like to propose, please also select the unicorn tier.

> For any of these cases, please detail your request in the company information field and we will work with you to fit your company's mythical situation. We will also find an appropriate monthly support amount to our non-profit foundation of $1500 or more per month. Please always consider enabling the growth of our non-profit foundation and the continuous growth of our metadata!


A lot of EE folks I've met _think_ they are competent in software, but I'd beg to differ. Similarly, I've met plenty of CS majors who know their way round hardware.


Depends on the definition of "software competency". Lofty functional programming in Haskell is quite different from low-level driver programming in C. I doubt you could take a world-class Haskell programmer, throw them into Linux USB driver land, and expect them to perform well.


This is pretty cool.

However, I do agree with some of the other comments that the style is a little dry for a webpage.

I think it would be very useful as the "print" style for articles or blog posts, without having to pipe through pandoc: most browsers support print to PDF.

