
It's a way to dig yourself out of the hole without a full rewrite, and with a smaller retraining effort for your developers.

The best advice is probably "don't", as it usually is for most people setting out to design a programming language, and even more so for people setting out to do a mostly backwards-compatible extension to a language that isn't suited for what you want it to do.

The second best advice is probably: just do C with classes. Allow defining your own allocator to make objects of those classes. It's fine if objects built with one allocator can only refer to objects built by the same one.
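To make the allocator idea concrete, here's a minimal arena-allocator sketch in plain C. It's only an illustration of "objects built with one allocator refer only to objects from the same one"; all names are made up:

```c
#include <stddef.h>
#include <stdlib.h>

/* One arena = one allocator; everything in it lives and dies together. */
typedef struct {
    char  *base;
    size_t used, cap;
} Arena;

static Arena arena_new(size_t cap) {
    return (Arena){ .base = malloc(cap), .used = 0, .cap = cap };
}

static void *arena_alloc(Arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;              /* keep 16-byte alignment */
    if (!a->base || a->used + n > a->cap)
        return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_free_all(Arena *a) {       /* free every object at once */
    free(a->base);
    *a = (Arena){0};
}
```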

Don't do templates; just do the minimum needed for a container type to know what type it contains, for compile-time type checking. If you want to build a function that works on all numbers regardless of whether they are floats or complex or whatever, don't, or make it work on the classes and interfaces you just invented. A Float is a Number, as is an Integer. Put all the cleverness you'd waste on templates into making the compiler somewhat OK at turning that into machine types.

Very specifically, don't make the most prominent use of operator overloading a hack that repurposes the binary left-shift operator to mean `write to stream`. People will see that and do the worst things imaginable, and feel good about themselves for being so clever.


Is modern C really much more complicated than old C? C++ is a mess of course.

I don't write modern C daily, so I can't really say. But I've been re-learning and writing C99 more these days, not professionally but for personal use, and I appreciate the smallness of the language. I might even say C peaked at C99. I mean, I'd be crazy to say that C-like languages after C99, like Java, PHP, etc., are all misguided for being unnecessarily big and complex. It might just be that I'm becoming more of a caveman programmer as I get older; I prefer dumb, primitive tools.

C11 adds a couple of nice things like static asserts, which I sometimes use to document assumptions I make.
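For instance, a small sketch of the kind of assumption-documenting I mean (the struct is made up):

```c
#include <assert.h>   /* C11: static_assert macro */
#include <stdint.h>

struct wire_header {  /* hypothetical on-the-wire format */
    uint32_t id;
    uint16_t flags;
    uint16_t len;
};

/* Document the assumption that the header is exactly 8 bytes,
   so it can be read straight off the wire. */
static_assert(sizeof(struct wire_header) == 8,
              "wire_header must be 8 bytes");
```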

They did add some optional sections like bounds checking that seem to have flopped, partly for being optional, partly for being half-baked. Having optional sections in general seems like a bad idea.


If you don't have compiler restrictions, C23 is also a pleasure to write. `typeof`, `constexpr`, `#embed`, `nullptr`, attributes and all.
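A quick taste of those, as a sketch assuming a C23-capable compiler (e.g. gcc -std=c23); `logo.png` here is a made-up file:

```c
#include <stdio.h>

constexpr int max_users = 64;            /* C23 constexpr object */

static const unsigned char logo[] = {
#embed "logo.png"                        /* C23: embed a file's bytes */
};

int main(void) {
    typeof(max_users) count = 0;         /* typeof, standardized in C23 */
    int *p = nullptr;                    /* C23 nullptr */

    [[maybe_unused]] int reserved = 0;   /* C23 attribute syntax */

    if (p == nullptr)
        printf("%d of %d users, %zu logo bytes\n",
               count, max_users, sizeof logo);
    return 0;
}
```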

The big new thing in C11 was atomics and threading.
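A minimal sketch of both (note that <threads.h> is optional in C11; glibc only gained it in 2.28):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_int counter;

static int worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);     /* no torn or lost updates */
    return 0;
}

int main(void) {
    thrd_t a, b;
    thrd_create(&a, worker, NULL);
    thrd_create(&b, worker, NULL);
    thrd_join(a, NULL);
    thrd_join(b, NULL);
    printf("%d\n", atomic_load(&counter)); /* deterministically 200000 */
    return 0;
}
```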

IDK about C11, but C99 doesn't change a lot compared to ANSI C. You can read The C Programming Language, 2nd edition, and pick up C99 in a week. It adds booleans, some float/complex math ops, an optional floating-point definition and a few more goodies:

https://en.wikipedia.org/wiki/C99

C++ by comparison is a behemoth. If C++ died and, for instance, the FLTK guys rebased their libraries onto C (and Boost too, for instance), it would be a big loss at first, but Chromium and the like rewritten in C would slim down a bit, the complexity would plummet, and similar projects would use far less CPU and RAM.

It's not just about the binary size; next to C++ today, even the Common Lisp standard (even with UIOP and some de facto standard libraries from Quicklisp) looks pretty much human-manageable, and CL has always been a one-thousand-page standard with tons of bloat compared to Scheme or its sibling Emacs Lisp. Go figure.


C++ is a katamari ball of programming trends and half-baked ideas. I get why Google built Golang, as they were already pretty strict about which parts of the C++ sediment you were allowed to use.

Not Google actually, but the same people behind C, AWK and Unix (and 9front, which is "Unix 2.0"). It has a simpler C (no POSIX bloat there), and the compilers basically embody the philosophy of Golang (cross-compile from any arch to any arch, CSP concurrency...).

Also, the Limbo language is basically pre-Go.



No.

https://en.wikipedia.org/wiki/Alef_(programming_language)

https://en.wikipedia.org/wiki/Limbo_(programming_language)

https://en.wikipedia.org/wiki/Newsqueak

https://en.wikipedia.org/wiki/Communicating_sequential_proce...

https://doc.cat-v.org/bell_labs/new_c_compilers/new_c_compil...

It was amalgamated at Google.

Originally Go used Ken Thompson's C compilers for Plan 9. It still uses CSP. The syntax is from Limbo/Inferno, and the GC probably came from Limbo too.

If anything, Golang was created for Google by reusing a big chunk of Plan 9 and Inferno's design, in some cases quite directly, as the concurrency model and the cross-compiling suite show.

A bit like Mac OS X under Apple. We all know it wasn't born in a vacuum: it borrowed Mach, the NeXTSTEP API and the FreeBSD userland, and they put the Carbon API on top for compatibility.

Before that, the classic Mac OS had nothing to do with Unix, C, Objective-C, NeXT or the Mach kernel.

Mac OS X is to NeXT what Go is to Alef/Inferno/Plan 9 C. Just as every macOS user is effectively running something like NeXTSTEP with the Macintosh UI design, updated for the 21st century, Go users are using a similar, futuristic version of the Limbo/Alef programming languages with a bit of Plan 9's concurrency and automatic cross-compilation.


That's wonderful how you tied those threads together to describe Go's philosophical origins. I'm having a great time exploring the links. And the parallel with NeXTSTEP is fascinating too, I've been interested in that part of software history since learning that Tim Berners-Lee created WorldWideWeb.app on the NeXTcube.

Not just philosophical; I've read somewhere that the first Go releases, in order to bootstrap themselves, bundled the forked/simplified Plan 9 C compilers. Later releases are written in Go itself.

Lowering the barrier to creating your own syntax seems like a bad thing though. Cf. Perl.

From your faq: "We maintain zero logs of your activities. We don't track IP addresses, …"

Front page says "zero logs"

"Some logs, including specifically data points you have promised not to log, but you mean well (?)" is pretty different from "zero logs".


Fwiw, "zero logs" in that context is usually in relation to requests through the VPN, whereas this discussion is about requests on their homepage? Or did I misunderstand something here?

They're good enough for fingerprinting and matching against other logs.

Also:

> // What we DON'T collect:

> - IP addresses (not logged, not stored, not tracked)

> - Usage patterns (no analytics, no telemetry, nothing)

> - Device fingerprints (your browser, your business)

so, I've read one blog post from this company, and already they're lying or incompetent


I hate to point it out, but that was written by an LLM that probably wasn't prompted precisely enough to not make up comforting thoughts like that.

Indeed, the whole thing reads like it was written by an LLM.

is that faster to say than do, or is it an accessibility or while-driving need?


I don't understand that use case at all. How can you tell it to do all that stuff, if you aren't sitting there glued to the screen yourself?


Because typing on mobile is slow, app switching is slow, and text selection and copy-paste are torture. Pretty much the only one of the interactions OP listed that isn't painful is scrolling.

Plus, if the above worked, the higher level interactions could trivially work too. "Go to event details", "add that to my calendar".

FWIW, I'm starting to embrace using Gemini as a general-purpose UI for some scenarios just because it's faster. The most common one: "<paste whatever> add to my calendar please."


As in "always run a network firewall" or "keep the IP secret"? Because I've had people suggest both and one is silly.


A network firewall is mandatory.

Keeping the IP secret seems like a misnomer.

It's often possible to lock down the public IP entirely so it accepts no connections except those initiated from the inside (like the Cloudflare tunnel or otherwise reaching out).

Something like Cloudflare plus a tunnel on one side, and Tailscale or similar to get into it on the other.

Folks other than me have written decent tutorials that have been helpful.


The Raspberry Pi Pico is much nicer to work with, if you're looking for an alternative. It has a dual core if you need it, and the fun little IO coprocessors if you want to get really low level. The Pico 2 even has a RISC-V mode.

The process of getting a binary onto the board is just dragging a file, and on Linux at least you can script it with picotool.
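For scale, a complete program is tiny too; a minimal blink sketch against the pico-sdk (assuming the standard CMake setup; `PICO_DEFAULT_LED_PIN` is defined for the stock Pico board):

```c
#include "pico/stdlib.h"

int main(void) {
    const uint led = PICO_DEFAULT_LED_PIN;  /* GPIO 25 on the stock Pico */
    gpio_init(led);
    gpio_set_dir(led, GPIO_OUT);
    while (true) {
        gpio_put(led, 1);   /* LED on  */
        sleep_ms(250);
        gpio_put(led, 0);   /* LED off */
        sleep_ms(250);
    }
}
```

Drop the resulting .uf2 onto the board's USB drive, or script it (e.g. `picotool load blink.uf2 -f`).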


+1, if only for the documentation. If you haven’t, skim through it: https://pip.raspberrypi.com/documents/RP-008373-DS-2-rp2350-... it’s truly unlike any reference manual I’ve ever read. I will happily pay a few extra cents at modest volumes for a chance to get detailed technical information and opinions from the design team.


The flipside of this is that the RP2xxx has rather poor hard IP, and the PIO is not quite powerful enough to make up for it.

They are great for basic hobbyist projects, but they just can't compare to something like an STM32 for more complicated applications.

They are a pleasure to work with and I think that they are great MCUs, but every time I try to use them for nontrivial applications I end up being disappointed.


STM32 is great!

> nontrivial applications

Out of curiosity, where do you find that you’re hitting the limits of what it can handle?


To give a very basic example: its timers can't do input capture. This means you have no easy way to do high-accuracy pulse time measurement. Compare the two datasheets: the STM32's timers literally have orders of magnitude more features.

Only having two UARTs can be limiting, and PIO is a no-go if you want offloaded parity checking and flow control. The PIO doesn't have an easy external clock input. No CAN or Ethernet makes usage in larger systems tricky. There's no USB Type-C comms support. Its ADC is anemic (only 4 channels, with 36 IO pins?). There are no analog comparators. It doesn't have capacitive touch sensing. There's no EEPROM.

None of them are direct dealbreakers and you can work around most of them using external hardware - but why would you want to do so if you could also grab a MCU which has it fully integrated already?


>This means you have no easy way to do high-accuracy pulse time measurement

is 2.5ns (https://github.com/gusmanb/logicanalyzer) to 3.3ns (https://github.com/schlae/pico-dram-tester) resolution not enough for you?


That is exactly the problem: you need to use PIO to constantly read the pins, and analyze the bitstream in software. At high speeds this takes up a substantial fraction of your compute resources, and it makes any kind of sleep impossible.

On a STM32 you can just set up the timer and forget about it until you get a "hey, we saw a pulse at cycle 1234" interrupt. The two are not the same.

My argument wasn't "this is completely impossible", but "this is needlessly complicated".
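For comparison, the "set and forget" STM32 version is roughly this sketch using the ST HAL (assumes CubeMX has already configured `htim2` for input capture on channel 1; the family header differs per part):

```c
#include "stm32f4xx_hal.h"       /* pick the header for your family */

extern TIM_HandleTypeDef htim2;  /* set up by CubeMX */
volatile uint32_t pulse_cycle;

void capture_start(void) {
    /* The timer latches CNT into CCR1 in hardware on each edge;
       the CPU is free to sleep until the interrupt fires. */
    HAL_TIM_IC_Start_IT(&htim2, TIM_CHANNEL_1);
}

/* HAL callback from the timer ISR: "we saw a pulse at cycle N". */
void HAL_TIM_IC_CaptureCallback(TIM_HandleTypeDef *htim) {
    if (htim->Instance == TIM2)
        pulse_cycle = HAL_TIM_ReadCapturedValue(htim, TIM_CHANNEL_1);
}
```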


Thank you for the really detailed reply.


You can buy custom RP2040 boards and attach GPS. My projects are paired with an Si5351A and a 0.5 ppm TCXO. GPS gets you 1PPS.


Yes, but the goal was "accurate capture of timer count on input pulse", not "get a 1PPS pulse somewhere on your board".


Agreed; RP2040 doesn’t have true timer input-capture like STM32 (no CNT->CCR latch on edge). That criticism is fair.

What Pico/RP2040 projects do instead is use a PIO state machine clocked from the system clock to deterministically timestamp edges (often DMA’d out). It avoids ISR latency and gives cycle-accurate edge timing relative to the MCU clock. It’s not a built-in capture peripheral, but it achieves the same practical result.

If you want a drop-in hardware capture block with filtering and prescalers, STM32 is the better choice. RP2040 trades fixed peripherals for a programmable timing fabric.


They're also very poor value for money if you need millions of them.

There are similar chips at a quarter of the price.

Obviously for hobbyist stuff, $1 doesn't really matter.


Can you give an example of a chip with software-defined IO coprocessors that is 1/4 the price? The pricing I’m getting on the RP2350 is 0.6EUR per chip.

When I’ve compared to other dual-core SoCs with programmable IO, like NXP with FlexIO (~€11) or ESP32 chips with RMT (~€1) they are much more expensive than the RP2350.. is there a selection of programmable IO chips I’m missing?


That's the thing: with proper dedicated peripherals you don't need the software-defined coprocessors.

Sure, they are great if you want to implement some obscure-yet-simple protocol, but in practice everyone is using the same handful of protocols everywhere.

Considering its limitations, betting on the PIO for crucial functionality is a huge risk for a company. If Raspberry Pi doesn't provide a well-tested library implementing the protocol I want (and I don't think they do this yet), I wouldn't want to bet on it.

I think they are an absolutely amazing concept in theory, but in practice they are mostly a disappointment for anything other than high-speed data output.


In Cortex M33 land, $15 will get you an entire NXP (or STM) dev board. An MCX-A156 will set you back about $5, which is about on par with an STM32H5. You can go cheaper than that in the MCX-A lineup if you need to. For what I'm working on the H5 is more than enough, so I've not dug too deep into what NXP's FlexIO gives you in comparison. Plus STM's documentation is far more accessible than NXP's.

Now the old SAM3 chip in the Arduino Due is a different beast. Atmel restarted production and priced it at $9/ea. for 9k units. Ouch. You can get knockoff Dues on AliExpress for $10.

Edit: I'm only looking at single core MCUs here. The MCX-A and H5 lineups are single-core Cortex M33 MCUs. The SAM3 is a single core Cortex M3. The RP units are dual core M33. If the RP peripherals meet your needs I agree that's a great value (I'm seeing pricing of $1+ here).

Edit2: For dual core NXP is showing the i.MX RT700 at around $7.


People are discussing Arduino alternatives, so yes, we are firmly within hobbyist territory.


That's true in general, but people do use these hobbyist boards as an alternative to a manufacturer dev board when prototyping an actual product.

It's reasonably common in the home automation space. A fair few low-volume (but nevertheless commercial) products are built around ESP32 chips now because they started with ESPHome or NodeMCU. The biggest energy provider in the UK (Octopus) even has a smart meter interface built on the ESP32.


To "yes, and..." you, the whole RP2040 microcontroller line is great and I would encourage folks to support the smaller maker/OSHW companies on Tindie[1] who use it.

[1] https://www.tindie.com/search/?q=rp2040


Flashing can be easy, sure. Compiling that binary, including library management, is not, unless you're using something like MicroPython. CMake is not hobbyist/student-friendly as an introductory system. (Arduino isn't either, but PlatformIO with the Arduino framework IS! RPi refuses to support PlatformIO, sadly.)

Arduino took over for 3 reasons: a thoughtful and relatively low cost (at the time) development board that included easy one-click flashing, a dead-simple cross-platform packaging of the avr-gcc toolchain, and a simple HAL that enabled libraries to flourish.

Only the first item (and a bit of the second) is really outdated at this point (with clones and ESP32 taking over as the predominant hardware), but the framework is still extremely prominent and active even if many don't realize it. ESPHome, for example, will still generally use the Arduino HAL/framework, enabling a wide library ecosystem, even though it uses PlatformIO under the hood for the toolchain.

Even folks who "don't use Arduino any more" and use PlatformIO instead are often still leveraging the HAL for library support, myself included. Advanced users might be using raw ESP-IDF, but the ESP-IDF HAL has had a number of breaking API changes over the years that make library support more annoying unless you truly need advanced features or more performance.


CMake doesn't spark joy, but it's not something you need to touch constantly. I figured out how to set up a basic CMake file, and now I mostly touch it to set a project name, add or remove modules, etc.

It's been a while since I used Arduino, but I remember having a harder time setting up a workflow that didn't require me to touch the Arduino IDE.


How long does a Raspberry Pi Pico run on a CR2032?

I'm asking because I used the Arduino IDE to program an STM32L011 and it would run for months or even years.


I believe you need 5V to run the regular $4 Pi Pico board. The chip only requires 3.2V though, so maybe it's not a hard requirement? There are probably other lighter-weight RP2040 boards, but I don't think months- or years-long low-power usage was an intended goal. It's $4 (still!) with 40 pins and PIO, and it runs MicroPython for people scared of (or chafed by) C. It's a prototyping/hobby tool.


The Pi Pico has a lower-power mode that sleeps when not in use and draws very little, but you the programmer have to activate it. I think it is called "lightsleep" but may be wrong.


Huh, interesting. I haven’t played around with one in a few years.


It has a sleep current of 200 microamps, so with a ~200 mAh CR2032 that's at most 200 mAh / 0.2 mA = 1000 hours, no longer than about 40 days…


The STM32L011 in no way requires the Arduino IDE; your code would likely compile with GCC just fine. The Pico would probably work if you redesigned your project, but your hardware very likely doesn't need to change, just the software you're using.


“The process of getting a binary onto the board is just dragging a file, and on Linux at least you can script it with picotool”

Even easier if you set up debugging using another Pico, a debug probe, or even a Pi (not sure if this works on the 5).


>It has dual core if you need it, and the fun little IO coprocessors

I think you're missing the point of what made Arduino so popular. It's not the HW itself; it's that you can plug in whatever display, sensor or motor driver is out there, and there are ready-made templates in the IDE that get you running immediately, without you having to know anything about how the HW or SW works under the hood.

The lack of a dual core or "fun IO coprocessor", whatever fun means in that context, was never an issue for the Arduino.

There's a virtually unlimited number of microcontrollers and boards out there for tinkering or production that are more powerful and have more features, but they all have a higher technical barrier to entry than the standard Arduino out of the box.

I don't wanna have to read datasheets and errata just to learn how to use a second core, deal with shared memory between cores, or configure the GPIO of the "fun IO coprocessor" just to get an LED blinking. That's not what fun is to a lot of people. Fun is getting the motor spinning until my coffee finishes brewing, and that's where the Arduino ecosystem's USP was versus other, more powerful platforms.


> I don't wanna have to read datasheets and errata

I recently started programming Arduino for profit, and you need to do exactly that, because the libraries range from somewhat buggy to completely broken. They often just write into random other registers, and when things do work it's only due to the chip working without any configuration and the library not breaking things too badly.


This is from a child comment that is dead, but I still wanted to answer:

> szundi

> If you go mainstream with your requirements, you don’t step on these though

Absolutely not. I am talking about things like the example in the README, which actually doesn't do anything, because they forgot the shift to make it write into the right field. Or they added "support" in 2012 for the only chip which is still sold, but forgot to update the register addresses, so now you have code for a newer chip, which uses the register addresses of the old chip. This does not work with either chip. And this is all with the libraries "officially" "provided" by Arduino.


If you go mainstream with your requirements, you don’t step on these though


The RP2xxx also comes with excellent documentation and libraries. If anything, with the drag-n-drop flashing it is even easier to work with than an Arduino.


>The RP2xxx also comes with excellent documentation and libraries

Are they more in number and easier to use than the Arduino libraries?

>If anything, with the drag-n-drop flashing it is even easier to work with than an Arduino.

Why do you think the Arduino is more difficult than "drag-n-drop flashing" by comparison? Do you think one click is more difficult?


From a practical end-user perspective, being able to buy a device, then download and install binaries onto it to make it perform a specific purpose by plugging it in and dragging the file over, is considerably easier than installing an IDE and downloading, compiling and installing from source.

Look at how Ben Eater built and set up the SIDKPico to serve as a SID audio chip in his 8-bit breadboard computer here: https://www.youtube.com/watch?v=nooPmXxO6K0


> Are they more in number and easier to use than the Arduino libraries?

It's not either/or, beyond what's in the native SDK RP2 boards also benefit from the Arduino ecosystem via the excellent and well maintained https://github.com/earlephilhower/arduino-pico


> Are they more in number and easier to use than the Arduino libraries?

I haven't done a direct comparison, but considering that the hobbyist ecosystem (which is the main source of those libs) is shifting over, it is just a matter of time.

> Why do you think the Arduino is more difficult than "drag-n-drop flashing" by comparison?

Because you need to install an IDE and mess around with things like serial drivers - and it gets a lot more complicated if you ever have to flash a bootloader. It's not hard, but it's definitely not as trivial as the RP2xxx's drag-n-drop.


I get the impression that the Pico with MicroPython does this pretty well. You can ignore the second core and PIO if you don't want them. They won't hurt you.


If you rely on hobbyist libraries, you will eventually have to read datasheets. They usually only contain the barest minimum that one can still call "working"; most of the functionality of the device you're using is just not supported. At which point it's datasheet time.

And before long, you'll find yourself reading datasheets first and doing your utmost to avoid the "ready made templates".


If you've run a microservice stack (or N of them) at scale with good results, someone saying it's impossible doesn't look pragmatic.


I’m not commenting on the pragmatic part.

My thesis is logical and derived from axioms. You will have fundamental incompatibilities between services' APIs if one service changes its API. That's a given. It's 1 + 1 = 2.

Now, I agree there are plenty of ways to successfully deal with these problems, like API backwards compatibility, coordinated deploys, etc., and it's a given that thousands of companies have done this successfully. This is the pragmatic part, but that's not ultimately my argument.

My argument is that none of the pragmatisms and methodologies for dealing with those issues need to exist in a monolithic architecture, because the problem itself doesn't exist in a monolith.

Nowhere did I say microservices can't be successfully deployed. I only stated that there are fundamental issues with microservices that, by logic, must occur definitionally. The issue is that people are biased. They tie their identity to an architecture because they advocated it for too long. The funniest thing is that I didn't even take a side. I never said microservices were better or worse; I was only talking about one fundamental problem with them. There are many reasons why microservices are better, but I just didn't happen to bring them up. A lot of people got defensive, and hence the karma.

