There are a number of reasons to do this. This sort of setup typically has the VM runtime flashed into the microprocessor's program space with the interpreted byte code stored in data space --- either internal or external.

1) It is obviously not as fast as native but it is fast enough for a lot of applications. In embedded work, you don't get extra credit for being faster than necessary. Speed critical functions like communications are mostly handled at native speed by the VM.

2) Code size. Interpreted byte code (minus the VM) can be smaller and less redundant than native. And by adding cheap external storage, code can easily expand beyond the native program space of the micro.

3) Easy remote updates. Byte code can be received and stored without making changes to the micro's program code (no re-flashing required). It's possible to fix bugs and make changes or even completely repurpose the hardware from afar.
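
To make the shape of this concrete, here's a minimal sketch of the kind of dispatch loop such a VM runs (the opcodes and gpio_write() are made up for illustration; the point is that the interpreter itself is ordinary flashed C, while the byte code it walks is plain data that can be swapped out):

    /* Sketch only: the interpreter lives in program flash; the "code"
     * buffer is plain data, so it can sit in RAM or external flash and
     * be replaced over the air without re-flashing the firmware. */
    #include <stdint.h>

    enum { OP_HALT, OP_PUSH, OP_ADD, OP_GPIO_SET };   /* hypothetical ISA */

    static void gpio_write(int32_t v) { (void)v; }    /* stand-in for a native driver */

    void vm_run(const uint8_t *code, uint32_t len)
    {
        int32_t stack[32];
        uint32_t sp = 0, pc = 0;

        while (pc < len) {
            switch (code[pc++]) {
            case OP_HALT:
                return;
            case OP_PUSH:                    /* next byte is an immediate */
                stack[sp++] = code[pc++];
                break;
            case OP_ADD:                     /* pop two, push the sum */
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case OP_GPIO_SET:                /* speed-critical work drops   */
                gpio_write(stack[--sp]);     /* straight into native code   */
                break;
            }
        }
    }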



>In embedded work, you don't get extra credit for being faster than necessary.

For battery-powered devices, you absolutely do. When you're talking over a slow protocol, being able to put the processor to sleep 95% of the time can translate into pretty massive power savings.
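
Roughly the pattern in question (a sketch assuming a Cortex-M part with CMSIS's __WFI(); rx_ready and handle_packet() are placeholders for the receive-ISR flag and the actual work):

    #include <stdint.h>
    #include "cmsis_compiler.h"          /* or your device header, for __WFI() */

    volatile uint8_t rx_ready;           /* set from the UART/radio interrupt */

    static void handle_packet(void) { }  /* placeholder for the real work */

    void main_loop(void)
    {
        for (;;) {
            while (!rx_ready)
                __WFI();                 /* core sleeps until the next interrupt */
            rx_ready = 0;
            handle_packet();             /* the faster this finishes, the longer
                                            the part spends asleep */
        }
    }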


> In embedded work, you don't get extra credit for being faster than necessary.

You absolutely do when you can cut power requirements and get by with cheaper CPU/hardware. I ran a whole consulting business redesigning poorly designed devices and redoing firmware for cost reduction. How does one decide what's “necessary”? What's necessary in the short term and in the long term are often not the same.


3 is a big one, I think. Using an interpreted language speeds up the cycle of prototyping, changing, releasing, and iterating. And for some purposes it's fast and small enough that the trade-off is worth it.


This makes a lot of sense. However, it makes me wonder how big the new attack surface for remote upgrades/updates is.

You need to implement a safe updater (with remote protocols) at the VM level. And I guess you can never upgrade the VM itself, or if you can, that adds extra complexity or requires physical access.

There also needs to be some kind of signature validation for every release, which means the device needs to perform some cryptographic operations and store, at minimum, tamper-proof public keys.
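
Concretely, that check is roughly one detached-signature verify over the received image before it ever reaches storage. A sketch, using libsodium's Ed25519 API purely as an example (store_bytecode() and the key bytes are placeholders; a real MCU build might use something like monocypher or a crypto peripheral instead):

    #include <stdint.h>
    #include <sodium.h>

    static const unsigned char release_pubkey[crypto_sign_PUBLICKEYBYTES] = {
        0 /* real key bytes baked into the VM image at build time */
    };

    static void store_bytecode(const uint8_t *img, unsigned long long len)
    {
        (void)img; (void)len;            /* stand-in for the external-flash writer */
    }

    int accept_update(const uint8_t *img, unsigned long long len,
                      const uint8_t sig[crypto_sign_BYTES])
    {
        if (crypto_sign_verify_detached(sig, img, len, release_pubkey) != 0)
            return -1;                   /* bad signature: never reaches storage */
        store_bytecode(img, len);
        return 0;
    }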


I can't really see how this is different from a native-code based device, especially one which is actually following good practice by not trusting what's in flash. Every stage of the boot chain still has to validate the next - there's just one more layer on top where the application is the runtime VM and it has to validate sub-applications / managed code.


I’m not sure you get the “controller” part of embedded microcontrollers. They’re not for time-independent deep thought; they’re for controlling devices and peripherals in real time.

1) You get shorter battery life from slower code. Also, everything is speed-critical anyway. It sucks when I’m controlling my toy robot and it’s trying to drive into the floor while doMyStuff(); has its head stuck in float math.

2) External anything adds cost to everything; adding an SPI flash for code to an otherwise-done board costs, I don’t know, $2 for the chip, $0.02 for capacitors and resistors, 2-3 days to redo the board, software… can I just make it a wish-list item for v2.0?

3) Why does it have to be wireless? Can I just snatch a prototype from the test batch, wire it to a debugger, and leave it on a desk? Do I really have to manage these keys, and why would you be signing it off for release if it’s not working?

Embedded devices are not like a Nintendo Switch; they’re like the Switch’s Joy-Cons, or sometimes even just the buttons on a Joy-Con. They aren’t fingernail-sized autonomous pi-calculating devices. Admittedly, Nintendo does update Joy-Con firmware sometimes, but very rarely, and they don’t make kids playing the Switch wait for the X-button input to finish processing. The buttons are read, sent out, and received. It just makes no sense to me that adding drag to real-time computing this way would help much.


>"In embedded work, you don't get extra credit for being faster than necessary"

You do get credit for using the cheapest MCU that can handle the task, and which MCU that is depends directly on the performance of the code. For battery-operated devices it's even more important.


The debuggability is also far better, I expect, speaking as a person who has spent hours tracing a crash deep in LwIP caused by a bad flag or a wrong interrupt priority.


Yes.

Assuming you're working with a quality VM and drivers, development speed is also improved. A lot of low-level details have already been worked out in the VM, freeing the programmer to work at a higher level.


LwIP (especially with Simcom’s SIM7000 via PPPoS underneath it) gives me PTSD. Even with decent JTAG debugging, it’s such a pain.


I think it's especially scary when most devkits come with it pre-integrated by the manufacturer, but the quality of that integration varies widely. My experience is nearly a decade out of date at this point, but when I was last doing Cortex-M3 stuff, I found that the integration on TI/Stellaris was excellent, and the integration on STM32 was one landmine after another; the same held true for the USB stacks, even for stuff that should have been dirt simple, like just emulating a serial port.


Yeah, the STM32 dev kit (CubeMX etc.) is all kinds of horrifying. ESP-IDF is better in some ways, worse in others. Its LwIP integration (esp_netif) is okay but has some interesting bugs you can hit, like socket creation leaking memory even after the socket is closed! Yay!


3) isn't really practical without some storage mechanism, though. Sure, you can make a change that sits in RAM until the next power cycle, but you could do that with firmware too if you plan for it. Whether you store the raw data in executable flash or in some external EEPROM doesn't really change the workflow much.



