alfiedotwtf's comments | Hacker News

> Has the technical and scientific community in the US already forgotten this huge breach of trust?

We haven’t forgotten… it’s mostly that we’re all jaded given that there have been zero ramifications, so what’s the use of complaining - you’re better off pushing shit up a hill


The problem I have is, conceptually a task always looks easy, but then as you’re coding, you hit several problems that are not simple to overcome - in fact, a lot of the time these issues turn into almost unsolvable problems that blow out any time estimates ;(

This is why you should use confidence intervals for estimates. Use an 80% confidence interval, for example: 10% of the time, you should come in under the best-case estimate, and 10% of the time, it should take longer than the worst-case estimate.

How do you know if your estimate is good? Would you rather bet on your estimate or on hitting one of 8 numbers on a 10-number roulette wheel? If you prefer one of the bets, adjust your estimates. If you're indifferent between the bets, the estimates accurately reflect your beliefs.

(The roulette wheel is from the book, How to Measure Anything by Hubbard. Confidence interval estimates are from LiquidPlanner, https://web.archive.org/web/20120508001704/http://www.liquid...)


That’s why time limiting rather than estimating works for me. It forces me to contend with the question: “can I get this done today?” That’s usually an easier question to answer because it’s so tightly time-bound. I’m not always correct, but I’ll know tomorrow if I wasn’t, rather than next month!

When I’m asked on longer time frames, I’m much less confident but it’s still more concrete than the other way around.


It really depends. Anyone doing meaningful work will have a hard time giving estimates. But churning out the next CRUD application with no special requirements can have no unknown variables. The question of course remains: why would anyone want to waste their time reinventing a spreadsheet?

>why would anyone want to waste their time reinventing a spreadsheet

I hope this is tongue in cheek, right? If not, here are some reasons:

1) spreadsheets embed "functions" via macros and macros are often flagged as malicious. Just combining native functions can get pretty complex.

2) in a spreadsheet, everybody sees the input, which is not always ideal

3) data types are controlled by users for the entire column or sheet, which can mess up formulas

I could probably think of additional reasons.


> One thing I really miss from the 80's and 90's: When you buy a product (hardware or software), its features and capabilities were stable

The complete opposite to this is Apple’s new UI for the iPhone. It’s so damn buggy I thought I accidentally clicked on Beta Testing!

… this has to be THE worst update they’ve pushed since forcing everyone to listen to U2


> Apple has the money. they can do better

But they haven’t. The latest Mac OS is atrocious. Glass? Are they literally trying to mimic 2006’s Compiz?


I think there are designers who think they’re hot shit but don’t know graphical history, and go by “feel” for their design.

> Mandiant is Google's incident response consulting business

Consulting business? I was under the impression (from Google Reader) that if users aren’t in the millions, then they’ll kill the project. How could they also run a high-touch consultancy?!

> they're probably sick of going to the same old engagements

Hmm… consultancies love this type of recurring revenue - it’s easy money


Google is a quarter-million-person company (if you count full-time employees, temps, vendors, and contractors).

Google Cloud is basically an entirely different company than Search or Maps. Cloud will happily sell you $10m in compute a year and a value-add $400k of security consulting.


> Consulting business? I was under the impression (from Google Reader) that if users aren’t in the millions, then they’ll kill the project. How could they also run a high-touch consultancy?!

Google also has Project Zero, which doesn’t fit into Google’s business culture either. I wonder if Mandiant is paying their payroll.


Project Zero had been around for 8 years before the Mandiant acquisition.

My bad. Still not sure which business unit is paying for their payroll.

The XDA and Compaqs etc. were WAY ahead of what anyone else had (even better than Sony’s PDAs), and yet they totally fumbled their lead


I bought a 4G Nokia 3310 yesterday, and to be honest, it’s actually not bad!


Oh, has it been six months already (… since their last attempt)?


> superloops

I’ve been doing async non-blocking code for decades, but this is the first time I’ve seen that word used. I assume you mean something like one big-ass select!(), or is this something else?

> IMO one of the big reasons Arduino stayed firmly hobbyist tier is because it was almost entirely stuck in a single-threaded blocking mindset and everything kind of fell apart as soon as you had to do two things at once.

This. Having to do something like this recently, in C, was not fun - you end up writing your own event management layer (and if you’re me, poorly).


Superloop is common terminology in the firmware space. Superloops are cruder than giant-state-machine-like case statements (but may still use them for control flow). They usually involve many non-nested if statements for handling events, and you usually check for every event one by one on every iteration of the loop. They are an abstraction and organizational nightmare once an application gets complex enough, and are ideally only used in places where an RTOS won’t fit. I would not consider asynchronous frameworks like Embassy to be superloops.
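
For anyone who hasn’t run into one, a minimal superloop in C looks roughly like this (a sketch only; the polling helpers are made-up stand-ins for real hardware checks):

    #include <stdbool.h>

    /* Stubs standing in for real hardware polls; all names here are made up. */
    static bool uart_byte_available(void) { return false; }
    static void handle_uart_byte(void)    { /* consume one byte, quickly */ }
    static bool timer_10ms_elapsed(void)  { return false; }
    static void run_periodic_tasks(void)  { /* short, non-blocking work */ }

    int main(void)
    {
        for (;;) {                       /* the "superloop": spin forever */
            if (uart_byte_available())   /* poll each event source, one by one */
                handle_uart_byte();
            if (timer_10ms_elapsed())
                run_periodic_tasks();
            /* every handler must return quickly, or the other checks starve */
        }
    }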


This superloop pattern can appear in more abstract scenarios as well.

The wildly popular ESPHome is also driven by a superloop. On every iteration, the main loop calls an update handler for each component, which is then supposed to check whether its timers have elapsed, whether there is data coming from a sensor, etc., before doing any actual work.

This pattern brings with it loads of pitfalls. No component ought to do more than a "tick" worth of work, or it can start interfering with other components that expect to be updated at some baseline frequency. Taking too long in any one component can, for example, cause serial buffers to overrun in another component.
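
Roughly the shape each handler needs to take (a sketch, not ESPHome’s actual API; millis() below is just a stand-in for a monotonic millisecond counter):

    #include <stdint.h>

    /* Stand-in for a real monotonic millisecond counter (e.g. a hardware timer). */
    static uint32_t millis(void) { static uint32_t t = 0; return t += 10; }

    #define SENSOR_INTERVAL_MS 1000u

    static uint32_t last_sensor_update;

    /* Called once per main-loop iteration; must do at most one "tick" of work. */
    static void sensor_component_loop(void)
    {
        uint32_t now = millis();
        if ((uint32_t)(now - last_sensor_update) < SENSOR_INTERVAL_MS)
            return;                  /* not due yet: hand control straight back */
        last_sensor_update = now;

        /* read the sensor and publish the value here; a long blocking read
           would delay every other component, and can let another component's
           serial buffer overrun before it next gets serviced */
    }

    int main(void)
    {
        for (;;)
            sensor_component_loop(); /* the main loop calls every component's
                                        handler like this, one after another */
    }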


Superloop is arguably how every PLC programmed in the standard way works.


I'm surprised nobody has put together a cooperative threading C framework using the -fstack-usage (https://gcc.gnu.org/onlinedocs/gcc/Developer-Options.html#in...) option supported by GCC and clang. With per-function stack usage info, you can statically allocate a stack for a thread according to the entry function, just like async Rust effectively does for determining the size of the future. Context switching can be implemented just like any other scheduling framework (including async Rust executors), where you call the framework's I/O functions, which could just be the normal API if implemented as a drop-in alternative runtime.

Googling I see people attempting to use -fstack-usage and -fcallgraph-info for FreeRTOS, but in an ad hoc manner. It seems there's nothing available that handles things end-to-end, such as generating C source type info to reflect back the computed size of a call graph based on the entry function.
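
For reference, the raw material is already close (a sketch; the file name and byte counts below are made up): compiling with -fstack-usage emits a .su file next to the object file, one entry per function:

    $ gcc -c -fstack-usage worker.c
    $ cat worker.su
    worker.c:12:6:read_sensor     48    static
    worker.c:30:6:log_sample     112    static
    worker.c:51:6:worker_entry    64    static

A framework could then walk worker_entry's call graph, take the deepest path (say worker_entry -> log_sample = 64 + 112 = 176 bytes), and generate something like:

    /* Hypothetically generated from the call-graph analysis above. */
    #define WORKER_ENTRY_MAX_STACK  176u
    #define CONTEXT_SAVE_MARGIN      64u   /* room for saved registers on a switch */

    static unsigned char worker_stack[WORKER_ENTRY_MAX_STACK + CONTEXT_SAVE_MARGIN]
        __attribute__((aligned(8)));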

In principle Rust might have a much tighter bound for maximum stack usage, but in an embedded context, especially embedded C, you don't normally stack-allocate large buffers or objects, so the variance between minimum and maximum stack usage of functions should be small. And given Rust's preference for stack allocation, I wouldn't be surprised if a C-based threading framework has similar or even better stack usage.


> I wouldn't be surprised if a C-based threading framework has similar or even better stack usage.

“… better stack usage, if you can keep it”

— Benjamin Franklin


> AI like chatgpt will dominate the user's experience

I hope not. Sure they’re helpful, but I’d rather they sit idle behind the scenes and only get used when a specific need arises, rather than being something like a Holodeck audio interface

