Cylon: JavaScript framework for robotics, drones, and the Internet of Things (github.com/hybridgroup)
90 points by nateb2022 10 months ago | 51 comments



I write embedded stuff for many of the popular IoT chips/modules/devkits/boards.

So, yes, there's a lot of high-level GC'ed languages that "run" on these systems.

For hobbyist one-off projects (where you can overspec the hardware by 10x without any negative financial impact to the project) they are great. For mass-production of a device, the extra RAM (and resources in general) that they require tend to be double what you would ordinarily need if written in C (or a crippled dialect of C++).[1]

For the best high-level logic control of embedded devices, you cannot beat esphome, and that's because that project converts the high-level logic (specified in YAML) to native C and/or C++ code, which is then pushed OTA to the device. That approach keeps the firmware as lean as possible.

[1] My best experience with stuffing a more abstract language into something like an atmega328 (I believe that's what's used for Arduino) is with a special purpose-built compiler that compiled a PLC type language of my own design (only for controlling and switching digital IO and reading ADC) to a byte code, which was run by a tiny interpreter on the atmega, using three bytes per instruction+operand.

Such a scheme allowed tons of conditional logic to be stuffed into the flash area, as the 'interpreter' on the chip would read and execute 3 bytes at a time from flash. The native-code version of the same conditional logic could easily be 10x the size.
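
For anyone curious what that scheme might look like, here is a rough, hypothetical reconstruction of the idea in C for an ATmega328: one opcode byte plus two operand bytes per instruction, read straight out of flash. The opcodes, the example program, and the pin/ADC details are all invented for illustration; the original compiler and instruction set aren't shown here.

    /* Hypothetical reconstruction of a 3-byte bytecode interpreter along the
     * lines described above: opcode + two operand bytes per instruction, read
     * straight out of flash on an ATmega328. Opcodes, example program, and
     * pin/ADC details are invented for illustration. */
    #include <avr/io.h>
    #include <avr/pgmspace.h>
    #include <stdint.h>

    enum { OP_HALT, OP_SET_PIN, OP_CLR_PIN, OP_SKIP_IF_ADC_GT };

    /* Example "program" in flash: drive PB5 high, then clear it again unless
     * ADC channel 0 reads above 128. */
    const uint8_t program[] PROGMEM = {
        OP_SET_PIN,        5, 0,
        OP_SKIP_IF_ADC_GT, 0, 128,   /* if ADC0 > 128, skip the next instruction */
        OP_CLR_PIN,        5, 0,
        OP_HALT,           0, 0,
    };

    static uint8_t read_adc8(uint8_t channel)
    {
        ADMUX  = (1 << REFS0) | (1 << ADLAR) | (channel & 0x07); /* AVcc ref, left-adjusted */
        ADCSRA = (1 << ADEN) | (1 << ADSC) | 0x07;               /* enable, start, /128 prescaler */
        while (ADCSRA & (1 << ADSC))
            ;                                                    /* wait for the conversion */
        return ADCH;                                             /* top 8 bits of the result */
    }

    int main(void)
    {
        DDRB = 0xFF;                               /* port B as outputs, for the demo */
        uint16_t pc = 0;
        for (;;) {
            uint8_t op = pgm_read_byte(&program[pc]);
            uint8_t a  = pgm_read_byte(&program[pc + 1]);
            uint8_t b  = pgm_read_byte(&program[pc + 2]);
            pc += 3;
            switch (op) {
            case OP_SET_PIN:        PORTB |=  (uint8_t)(1 << a); break;
            case OP_CLR_PIN:        PORTB &= ~(uint8_t)(1 << a); break;
            case OP_SKIP_IF_ADC_GT: if (read_adc8(a) > b) pc += 3; break;
            case OP_HALT:
            default:                for (;;) ;     /* done: spin forever */
            }
        }
    }

The native-code equivalent of each 3-byte instruction would be a whole sequence of load/compare/branch/store AVR instructions, which is roughly where the ~10x size difference comes from.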


I'm interested in learning why, say, a device would be much more expensive if we used 1GiB of DRAM chips instead of 32MB. The chips themselves have the same physical footprint and power requirements, and as long as you pick one that is compatible with your µC or SoC you don't need to change that either. That leaves the cost of the chip, where you might need to spend a few cents more. I'm pretty sure the consumer doesn't care if the widget costs $21.55 or $21.59. The same would apply to NAND flash storage.

I do get that if you have to produce millions of widgets, a few cents here and there might add up, but if you just exchange one part for a physically similar but higher-capacity part, and the list price difference is really small, why would that be such a hurdle? I'm not talking about redesigning all the power supplies, adding a PMIC or building an entire computer, but even in the WRT54G days you could just solder replacement DRAM and NOR chips and be done for a few cents, for a 500% increase in capacity. In later models you could still do that, though the NOR became NAND and BGAs are harder to work with; it's still pretty easy and cost-effective.

In the EE world, designing for manufacture does try to squeeze every fraction of a cent out of everything, while in the software world using 10MB of RAM instead of 1MB is fine as long as you decode your PNGs correctly (per an earlier comment about the libpng reference implementation a few days ago). Even at volume, I doubt that saving a tenth of a cent really matters until you hit some extreme production numbers (even 10 million units would at best save you $10k).

Mechanically there would be something to consider, i.e. connectors vs screws vs solder vs glue etc, those all have a direct impact on how reliable a connection is, how long it will last, and how easy it is to manufacture (and how easy it is to take apart later). But fractions of a cent when compared to all the other aspects?


Well, there are two issues here. The first one is the obvious one: BoM cost.

> I'm interested in learning why, say, a device would be much more expensive if we used 1GiB of DRAM chips instead of 32MB.

Those are different product categories. The IoT devices tend to range from ~400KB of RAM to around 1MB of RAM. Most have the RAM in package, not external, so putting more RAM is expensive because it is done during fabrication.

The most recent IoT product I delivered for a client was based on an ESP32-C3 (RISC-V) module with under 300KB of usable RAM and 4MB of flash, of which exactly 1MB can be used for the program code.

It cost $5. The one which allows you to have slightly over double the program code (around 2MB in flash, out of 8MB) is $7.50, but it has the same RAM in-package.

At those differences, that extra $2.50 is literally more than the client would make off each device sold!

And, of course, it's a $5 component. Using 32MB/1GiB as a reference point puts you at the lower end of Raspberry Pi territory; basically a different product for a different use.

The second issue is product differentiation. Products are priced based on what the market will bear. The actual cost of production matters very little once a developed product hits the market.

Something with 1GiB of RAM is intended for a very different market than something with 32MB of RAM. A manufacturer who simply goes ahead and places the extra RAM in (and adjusts the sales price to ensure that it balances out) is:

a) Leaving money on the table for those use-cases where the customer is willing to pay twice as much,

b) Missing out on sales because that $5 increase in price moved it out of consideration in the market it was in.

What you'll mostly find if you go ahead and replace 32MB with 1GB (or 1GiB, which is close enough) on some product is that no one uses it for the higher-spec use-case[1] and you've lost some sales on the low end.

[1] Because RAM is not the only upgraded component in the higher-level use-case products. I'm not too familiar with products in the >512KB RAM class, so take the following with a pinch of salt (i.e. I might be wrong). Typically the 1GB RAM products are used for a specific use-case. Those use-cases can't be done on the ~200MHz cores that come with the 16MB/32MB SBCs. It's cheaper to simply switch to a rpi-class of computer that comes with all matching components.

A quick search shows that there really aren't many sub-1GB devices anymore, anyway, other than industrial components.


> pretty sure the consumer doesn't care if the widget costs $21.55 or $21.59

Then charge $21.59, use the cheaper chip and make additional profit


Any chance you'd like to share your PLC-to-bytecode stuff? I've been thinking of doing something like this too, even for, say, scripting of game events, or like you did for controlling the logic of I/Os. But sadly there's never enough time.


I'm afraid I don't have it anymore :-(

I might have something similar lying around on my previous PC which is in storage somewhere, but the "something similar" was for a short course I taught to interns and based on the Arduino itself.


What are the pros and cons of this compared to Johnny-Five?

http://johnny-five.io/


This project looks to be long-dead, so the advantage would appear to be that johnny-five is alive.


I'm amazed this joke appeared organically and wasn't forced. Seems too perfect.


I can’t speak directly for deadprogram and the status of his projects, but I know much of his activity of late has been working in this same space, but in Go:

* Gobot (https://gobot.io/)

* TinyGo (https://tinygo.org/)

* GoCV (https://gocv.io/)


Last commit was 8 years ago. I don't think anyone is going to use it now.


[flagged]


As an occasional tinkerer who's very fluent in JS, getting to play around in a known language is a bonus. Side projects that require learning or re-learning languages usually go nowhere because the language itself becomes the side project.


And?


I'd reach for Nerves.

https://nerves-project.org/


> I'd reach for Nerves.

Not applicable to the clear majority of IoT hardware; it's for embedded systems which are large enough to run Linux, basically.

From the website:

> Firmware sizes start in the 20-30 MB range.


Fine, there I'll reach for AtomVM.

https://github.com/atomvm/AtomVM


Seems kind of pointless. A flight controller that can run JavaScript fast enough would be a waste of resources.


JavaScript is a hell of a lot faster than Ruby, PHP, or Python, yet Python is mostly used for AI/ML. I think V8 is plenty "fast enough" (why wouldn't it be)?


I think their problem is not the performance of JS, but efficiency. V8 is relatively resource-heavy, and if your microcontroller can run it, it's overpowered for what it needs to do.


As if there's no downside to building everything in C


Just need to be a jerk today, huh?


When it comes to performance, ML/AI isn't "written in Python," it's "configured in Python."


Sure and V8 is written in C++ and JS is just the scripting layer


No, that's not what I mean.

What I mean is that the performant libraries of the ML/AI python ecosystem are all written in C or another high-speed compiled language. The speed of the underlying execution is not meaningfully affected by the speed of the Python interpreter. If CPython 3.12 came out with a huge performance regression and ran 50% slower than 3.11, these ML programs would run at almost exactly the same speed they do today.
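
To make that concrete, here is a minimal, illustrative sketch of the pattern: a CPython extension module whose inner loop is compiled C, with Python acting only as the calling layer. The module name `fastmath` and its single function are invented for this example; real libraries such as NumPy or PyTorch are vastly more elaborate, but the division of labor is the same.

    /* fastmath.c -- toy CPython extension: the arithmetic runs as compiled C,
     * so the speed of the Python interpreter calling it barely matters.
     * Build (assuming a typical Linux setup):
     *   gcc -shared -fPIC $(python3-config --includes) fastmath.c -o fastmath.so */
    #include <Python.h>

    static PyObject *py_dot(PyObject *self, PyObject *args)
    {
        PyObject *a, *b;
        if (!PyArg_ParseTuple(args, "OO", &a, &b))
            return NULL;

        Py_ssize_t n = PySequence_Length(a);
        if (n < 0 || PySequence_Length(b) != n) {
            PyErr_SetString(PyExc_ValueError, "expected two sequences of equal length");
            return NULL;
        }

        /* the hot loop: plain C, no interpreter work per element beyond
         * unboxing the floats */
        double acc = 0.0;
        for (Py_ssize_t i = 0; i < n; i++) {
            PyObject *x = PySequence_GetItem(a, i);
            PyObject *y = PySequence_GetItem(b, i);
            if (x && y)
                acc += PyFloat_AsDouble(x) * PyFloat_AsDouble(y);
            Py_XDECREF(x);
            Py_XDECREF(y);
            if (PyErr_Occurred())
                return NULL;
        }
        return PyFloat_FromDouble(acc);
    }

    static PyMethodDef methods[] = {
        {"dot", py_dot, METH_VARARGS, "dot product of two float sequences, computed in C"},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef fastmath_module = {
        PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, methods
    };

    PyMODINIT_FUNC PyInit_fastmath(void)
    {
        return PyModule_Create(&fastmath_module);
    }

From Python it's just `import fastmath; fastmath.dot(xs, ys)`, and a slower interpreter release would change almost nothing about how fast that call runs.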


> not what I meant

> the libs are all in C

Yeah, Node uses bindings too. Examples:

node-llama-cpp (C++)... node-SDL (C)...

Why wouldn't it also be true for mechatronics? Have some native bindings for C libraries then you can write Node or Python and still use the machine code where needed. Speaking of which:

> The speed of the underlying execution is not meaningfully affected by the speed of the Python interpreter

Wouldn't JIT vs. no JIT have an impact on speed, and also some of the "heuristical" stuff V8 does?

I wouldn't rule out JavaScript for mechatronics, everything ends up in JavaScript.


That's a good point.

I suppose my knee-jerk reaction was driven by "interpreted languages are not suitable for low-resource machines," because the fixed overhead of booting up a Javascript or Python interpreter/JIT is comparatively higher in embedded environments. But the space nowadays is a lot bigger/wider than the traditional "embedded" space (where 4KB of RAM is a luxury), so there's no reason a drone or robot can't have a few gigs of RAM and multiple cores to use. So Javascript/Python is probably fine for a huge category of mechatronics applications.

Thanks for discussing with me. :)


Because performant flight controllers are written in C, and often ARM assembly.


> Seems kind of pointless. A flight controller that can run JavaScript fast enough would be a waste of resources.

According to the docs (for both this and the Johnny-Five project), the JS ONLY runs on a PC-class computer. You connect your IoT device to the computer that is running the JS program, and the JS program then controls the device. The IoT device must be tethered to a PC of some sort.

I'm guessing the controlling PC does things like "set GPIO-$X to input", "read GPIO-$X", "set GPIO-$Y to output", "write GPIO-$Y", "read ADC-$A", etc.
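
To illustrate the general shape of that tethered model (this is not Cylon's or Johnny-Five's actual wire protocol; Johnny-Five normally speaks Firmata over a serial link, and the command names and helper functions below are invented), the firmware side reduces to a loop that executes one small command per round trip:

    /* Toy device-side command loop illustrating the tethered model described
     * above. The line-based commands ("OUT 13", "HI 13", "ADC 0", ...) and the
     * serial/gpio helpers are made up for illustration; they are not a real
     * protocol or library. */
    #include <stdio.h>

    /* assumed platform helpers, provided elsewhere by the firmware */
    extern void serial_read_line(char *buf, int len);
    extern void serial_write_int(int value);
    extern void gpio_set_mode(int pin, int is_output);
    extern void gpio_write(int pin, int value);
    extern int  gpio_read(int pin);
    extern int  adc_read(int channel);

    void command_loop(void)
    {
        char line[32];
        int pin;
        for (;;) {
            serial_read_line(line, sizeof line);             /* one command per tether round trip */
            if      (sscanf(line, "OUT %d",  &pin) == 1) gpio_set_mode(pin, 1);
            else if (sscanf(line, "IN %d",   &pin) == 1) gpio_set_mode(pin, 0);
            else if (sscanf(line, "HI %d",   &pin) == 1) gpio_write(pin, 1);
            else if (sscanf(line, "LO %d",   &pin) == 1) gpio_write(pin, 0);
            else if (sscanf(line, "READ %d", &pin) == 1) serial_write_int(gpio_read(pin));
            else if (sscanf(line, "ADC %d",  &pin) == 1) serial_write_int(adc_read(pin));
        }
    }

Every read or write crosses the tether, which is why anything latency-sensitive has to live in the firmware or the protocol itself.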

Maybe they designed their custom protocol to also handle time constraints (like clocking a signal at a certain frequency on a particular pin), or maybe counting transitions on a digital input for a specific duration, so that you can mostly do what you'd expect to, but I wouldn't bet on it.

My understanding of these types of projects is that they don't compile the input into a state-machine that is downloaded to the device; they send each instruction as and when it occurs.

This is especially problematic considering that the example in the linked page is for a drone taking off, flying, then landing 10s later. You'd better hope that your drone doesn't ascend so fast in that 10s that it is out of range by the time the `land` command is issued.


Has anyone actually built embedded devices with JS? Sounds terrible


Last commit: 8 years ago.


Author appropriately named deadprogram


I don't think this is being maintained.


...somewhat off topic... I seem to remember a JavaScript-based system that could be installed on (old) mobile phones. This way one had easy access to all the phone's sensors and wifi, and could build simple GUIs to repurpose the phone. Does anyone know what it is called or where I can find something similar? Thanks =)


[flagged]


If you're pushing bits and bytes you might want tight control over the runtime and guaranteed execution time. If you're not, I think it's perfectly acceptable.


The same way it's 'perfectly acceptable' to use a Canon to kill a mosquito.


Also, in many circles, known as “a larf”. In the immortal words of Kurt Vonnegut, “We’re here on this Earth to fart around—don’t let anybody tell you any different.”


And what's the problem with that?


Well, it's a particularly inefficient use of a delicate camera for a purpose for which simpler, more resilient, more purpose-appropriate tools exist.


> particularly inefficient

I'm optimizing for my own time.

> simpler

If you don't need fine grained control of memory how is a non-gc'ed language simpler?

> more resilient

How is it more resilient to use a language that requires manual memory management?

> more purpose-appropriate

Not an actual argument.


You seem to be taking my response to a question about what is wrong with using a “Canon” [sic] to swat a mosquito as if it was an argument about what that was a (typoed) metaphor for rather than a humorous observation about what was literally described.


But javascript isn't even a good tool; you are optimizing for literally nothing else than to not spend like 20 minutes picking up Go or something.

Instead you pick literally the worst of the popular languages, one that is painfully inefficient.


> But javascript isn't even a good tool; you are optimizing for literally nothing else than to not spend like 20 minutes picking up Go or something.

I know Go and Rust. I likely wouldn't use Javascript for embedded, the tool I use will be informed by whether I need performance and what language has the best ecosystem. If I can get away with GC for my app I will use it.


There is a difference between 'maximizing performance' and 'picking literally the worst tool for the job because I don't care lmao'


In many ways typescript is terrible, but I've been writing it for a long time now and I don't often write bugs in it that get shipped to production. I've never really run into bugs that I could blame on typescript being bad. I just don't do dumb things like comparing strings and numbers. So yeah, it has a bunch of weird footguns and is just generally not efficient, but:

1. I can write code that's fast enough to solve the problem.

2. I can write it quickly.

3. I can hire a million people to work on it if I need to.

4. Every single company that releases a service releases a Typescript or Python library first.

Also from personal experience, using 1 language across all environments speeds up development massively. Especially early on in a project when the schemas are not set in stone. If every time you update a table you need to change 4 different files and update all your validation logic you will have a rough time.


When you say JavaScript, are you talking about runtime or language?


Both


And what is wrong with JS as a language in this case? Because there are runtimes optimized for low-level work.


IMO if you work on IoT, you are in the wrong place. But who am I to judge.


Naively said. Everything ends up in JavaScript.


Exactly my thoughts.


Urgh





