It's really a shame that VCs don't seem to have figured out how to do due diligence (DD) on science-based startups.
The same is probably somewhat true in tech generally, but VCs in this area don't seem capable of doing the DD. They invest based on personality, connections, or business background.
That's all great, but what these early-stage companies really need is a CEO with a strong scientific background and focus.
In my experience, those CEOs often don't come across as strong enough to investors, who are generally more comfortable investing in tech, which is easier for them to understand.
So my hope is that high-profile failures make VCs wake up and assess scientific startups differently. However, I think it's more likely that they'll just avoid them altogether.
The article mentions microcontrollers that use 100 milliwatts as the lower end of embedded CPUs.
There are actually microcontrollers that use around 1 milliwatt for very low-power applications, for example the MSP430. TI has a neat video of one running on power generated from grapes:
I put one of the boards under a microscope and looked at some of the ancillary parts. There's a 22 uF decoupling capacitor. No real surprise there. It appears to be one of those nice Kemet polymer tantalums designed for high-density SMT applications.

The datasheet pegs the cap's internal leakage at 22 uA. That means the capacitor draws a thousand times more power than the dozing CPU.
Yes, I play with little robots built with microcontrollers. For some of the little boards my friends and I make, we leave off the LEDs because they consume more battery than the CPU. (Of course, the motor makes up for that quickly...)
You can't run valgrind on the kernel (not because of the speed, but because of the way valgrind works; it's not technically feasible). But even if you could, the bad access only occurred when that particular type of packet was received. The kernel panic was essentially the kernel's run-time bad-access detection triggering.
It was caught with syzkaller and KASAN. syzkaller is a kernel fuzzing tool; KASAN is "like valgrind, but for the kernel" (the kernel version of AddressSanitizer; in general I like to describe AddressSanitizer as "like valgrind, just better").
It feels like failures on the scale of Segway are now pretty run-of-the-mill. They "only" raised $166M [1]. That doesn't really compare to modern failures like Theranos, at $686M.
I agree with what was said in your links. The code was poorly written, but that doesn't mean it can't be improved once they get an active community. Sadly, right now they are inactive and I don't know why.
I think the complaints about peripheral source are valid, but they have had it running on silicon for months, so the answer to "will it work" is pretty clear.
"First of all, it is just a simple microcontroller and the implemented RISC-V instructions are not that significant for the purpose. It's like expecting the SIM cards used in mobile phones to be equivalent to PCs because they run "Java"."
Who is calling this a PC? OpenV seems to be marketing itself as a microcontroller, it keeps comparing itself to Arduino.
I was interested in becoming an active contributor to OpenV, but this project turned out to be like a school project. I tried to clone the mriscvcore and created my own testbench for the code, then found a simple but critical bug in the handshake signals for the read and write ports [0]. I don't know what is really happening with the authors, but they seem to have been inactive for a long time, which is so disappointing.
I am still searching for an active RISC-V Verilog implementation project to contribute to through design and verification. For those who are looking for contributors, you may want to drop me an email: my username at gmail dot com.
Why not picorv32? Size-optimized, open-licensed (no 64-bit, but that's where you could come in and be a star!). Clifford Wolf, the author, also wrote Yosys and IceStorm, which you can use to synthesize picorv32 for the iCE40 FPGAs with a fully open-source HDL toolchain. It's really nice and usable!
Clifford has also been doing a lot of cool work adding SMT-based verification tech for synthesis via Yosys, and even started working on a verification bench for RISC-V implementations: https://github.com/cliffordwolf/riscv-formal - he's also very nice and approachable IMO. I'm sure he'd appreciate some extra help.
Pedantry: RISC-V is open hardware only from the logic level up.
The "transistor" level would require open source silicon layout software, and probably some level of NDA relaxation on the part of the fabs who surely won't allow a production mask set on their modern[1] processes to be released to the public. I don't see that happening any time soon.
[1] Mature/legacy processes actually do have public design rules available. The MOSIS "scalable CMOS" rules apparently work for real devices at the 180nm node or thereabouts.
I wonder, if the logic level was copyrighted under a sharealike license such as GPL, and the mask is a derivative work, does that make the mask subject to the sharealike license too? (I know that there exist "mask work rights" as a separate legal category from copyrights, but I'm not sure how a mask work right would interact with a sharealike copyright license...)
My informed amateur opinion is that no one really knows. There's no case law on this that I'm aware of.
Basically: the "shape of a transistor" (and highly tuned structures like SRAM and flash cells, etc.) in modern logic processes is itself a tightly guarded secret. Outside some crappy electron microscopy done by people like Chipworks, and the much-prettier-but-really-spun pictures released by the fabs' marketing departments, no one involved in the process releases anything.
Now, maybe that's because the masks are ultimately copyrighted by the fab and protected. Or maybe if they leaked they'd be perfectly fine. But what happens in practice is that before the fab will consent to make your chips, they force you to sign an elaborate NDA with, no doubt, huge contractual penalties.
Again, I don't expect this to change. Silicon fabrication at the mask level will not be open source any time soon, though maybe there's hope that open source tooling might arrive. We just won't be able to see its output.
Hardware isn't subject to copyright. There are places in China that can reverse engineer boards from hardware back to design files, so GPLing board designs is pretty pointless.
There are patents, but they don't protect designs or individual physical items, just processes.
There is IC mask protection, but that doesn't protect modified versions of the IC masks (if I understand it correctly):
Wait, so this is mostly a microcontroller chip, right? I keep hearing about its applications in smartphones, but I kind of doubt it could make it at 160 MHz... not to mention PCs.
It's an instruction set architecture with different extensions; a "family", if you will. Just like ARM has 32-bit Cortex M0..M4 microcontrollers, as well as ARM64 with an MMU and the works.
These RISC-V silicon implementations on CrowdSupply are on the low end: RV32I (32-bit, integer-only). The SiFive one supports the M (multiply), A (atomic), and C (compressed, like Thumb for ARM) extensions. No, you cannot build a smartphone with these.
But there are other implementations. An example high-end implementation, 64-bit (you cannot buy silicon yet, but maybe you can burn your own FPGA), is BOOM:
Part of the problem is that producing a high-end RISC-V chip doesn't just need the RISC-V core. It needs all of the supporting IP as well, which can be substantial (interconnect, system caches, memory controllers, peripherals, etc.). Plus, to produce something that can compete with modern mobile APs you'll need to fab on a modern process (e.g. 28 or 20 nm, though that's a bit long in the tooth if you're targeting things coming out rather than matching current technology; 16FF would be better), which increases costs substantially. Simple, slow microcontrollers can be fabbed on older and far cheaper processes.
> but I kind of doubt it could make it at 160 MHz...
The second chip in those links is an Arduino-compatible MCU that already runs at 320 MHz+, so it's got 2x as many cycles as you wanted!
> not to mention PCs.
There have also been demo runs of some 64-bit RISC-V boards, using the same FPGA core that the 320 MHz chip uses ("Rocket"), that have already hit 1 GHz+ on old (think 180nm?) fab processes. It can certainly do GHz, it seems.
RISC-V is an instruction set architecture (ISA) specification. So the standard is really just a definition of some instructions and what they should do. The unique part is that it is an open standard, and so does not require any licensing or royalties.
This means that anyone can then go away and build their own processor implementation to that standard. Intel could create their own chip for that instruction set and get it running at 4 GHz with their latest fab facility. At the other extreme, I could sit at home and use Verilog to create a soft core that is downloaded to an FPGA, and be up and running with a much slower implementation. Neither of us has to pay anyone for the right to do so, and the code we write should work on both because it adheres to the same standard.
In contrast, if you want to create an ARM or x86 processor then you have to pay a license fee. Even if you implement the entire processor from scratch and use nothing from ARM or Intel except the instruction set specification, you still need to pay, and it is not cheap. There are many situations where you want to avoid that cost: a researcher wants to experiment with a new idea; you want an Internet of Things processor but cannot afford the license cost of an existing one; or you're Samsung, producing zillions of phones a year, and saving that dollar per phone is worth having.
As the RISC-V standard is relatively new, the first designs to market are the targets that are easiest to create, i.e. microcontrollers. SiFive are now moving up the scale and are working on a full processor with an MMU that could be used to run something like Linux. I would expect to see something like that appear within a year or so.
These initial chips in crowdfunding are smallish/microcontroller-like, since they are easier to make and are what people like to toy with initially.
RISC-V is an ISA that can be used for all kinds of chips eventually, just like there are ARM offerings from <100 MHz microcontrollers to >1 GHz multi-core phone SoCs.
I have one. I'm writing a Forth for it at the moment, using the ISA simulator as well. It's nice hardware in terms of looks/stability, has arduino compatible pin layout, etc.
It has a lot of Flash for your images or whatever, but very little SRAM, even compared to e.g. the Arduino Zero: only 16 KB. So that's quite unfortunate. I only had an Uno R1 though, so it's an upgrade for me in every dimension.
The clock speed is comparatively very high, so it could drive some things that may've been out of reach, but they aren't entirely clear on the power usage. The Dhrystone/mW metric is obviously very good, but if you don't need the raw speed (and can live with the little SRAM), it's unclear how well it fares power-wise. You could probably get away with less.
The tooling seems to work fine, including the OpenOCD-based debugger (though it has some natural limitations), and compiler toolchain. It'll be nice when the binutils port is upstream so I can get rid of a custom build.
I don't know about the Arduino IDE support. I don't use it. While it's pin-compatible, most Arduino libraries probably need some light modifications before you can reliably use stock shields/extensions with the HiFive1. They have some examples (e.g. the Adafruit capacitive display library) on GitHub of doing this. So you can use it with existing stuff if you get your hands a little dirty.
There are no gerber files or masks or whatever I guess, though the RTL they use is available, and they do have directions on how to synthesize the RTL for a small Xilinx Artix FPGA to develop the chip. The FPGA is about $70 I think so you could also go that route and use the soft-core version, though only if you care about HDL.
It's all very new, e.g. the original Dhrystone port on GitHub was a bit busted, but they fixed it very fast. Expect some roughness, but it's mostly been smooth for me since I just use GDB/GCC, OpenOCD and the simulator.
I'd suggest buying it if you want to experiment with RISC-V, support the project, and play with the new tooling -- but probably not as an Arduino replacement, if you have something like a Zero already. Unless you're willing to get your hands dirty, which maybe you are!
On the page: "While the onslaught of some social media hoo-hah will demolish some servers in the modern era, normal single or multi-thread use of a site will not cause trouble, unless there's a server misconfiguration"
I've been reading Cryptonomicon recently. There's an entertaining bit where one of the characters asserts that beards are "totems of the white male patriarchal privilege".
It's a bit silly. But it is interesting to note that every bearded person mentioned in this article is white.
Git's rise to become the de facto standard revision control system has been fascinating to watch.
I think it does provide real benefit over Subversion, but there are also smaller projects where the gain is marginal.
The tooling around it (in particular GitHub) and the perception that git is the "right" way to do things probably helped it achieve prominence, as did early adoption by a few high-profile projects.
It's well designed in the open-source philosophy: it's designed for people who are spread all over the world, working together but in separate, isolated repositories. There probably are quite a few use cases where Git is not the best-suited tool for the task. I don't doubt there are several situations where a team's workflow can be improved and sped up by everyone using SVN, especially if everyone works together in the same building for the most part.
That being said, most complaints about Git are about its difficulty for newcomers, its non-straightforward workflow, and its lack of clear, concise documentation. Once the concepts are reasonably grokked, one would likely be hard pressed to find a developer who prefers a centralized system such as SVN over Git.
Is it that hard to understand that `git add` adds files to the list of files tracked by the repository (which is updated when changes are committed to it) and `git checkout` checks out a version that has been previously committed?
I mean, that's pretty much how all version control systems operate. Is it that hard to wrap anyone's head around the basic functionality that is common to any version control system in use?
No, it's not that hard to understand. It's also not how Git actually works. You're ignoring, or perhaps forgetting, the index (the "staging area"), the many and various things both of these commands actually do, and the ways in which these sorts of "oh, it's simple, really!" elisions betray those not yet experienced enough to recognize them as the overextended abstractions they are.
I'm talking about the same Git as the article author is. If you have a different Git in mind, please point me to it! Perhaps I would prefer to use that one instead.
Huh? Branching in Git is a natural thing to do, whereas in SVN it is, at best, not user-friendly.
"Subversion has no internal concept of a branch—it knows only how to make copies"
Now, even if I do not need the distributed features of git, with git I have a VCS with a local repo, and I have easy branching.
So even when I work on my side project I use branching a lot, because I can do cheap experiment branches, go back to a previous state, or pick what I liked from my experiment. For me it is just a totally different approach to development. When I stopped using Subversion and started using Git, my world changed for the better; Git gives me more control. I can pick parts of files to be staged, or pick what I need to be committed, whereas partial commits with Subversion are black magic at best.
So even for smaller projects, using Git makes a whole world of difference.
I don't understand the hostility towards it. Complaints like "oh, if you misuse this tool that you didn't bother learning about, it does things you probably didn't expect" are basically useless.
Try randomly slapping your hands on a keyboard and writing software in <insert language>. Now complain that because you can't do that, <insert language> sucks. That's basically how I see these types of complaints about git.
In Formlabs' case they had a $1.8M seed round prior to the crowdfunding. They raised ~$3M from crowdfunding.
This probably helped give them some runway to put together a Series A. However, they still raised an additional $50M.
So, I don't think they're a great example of a company bootstrapping from crowdfunding. It's more of a traditional play, with some validation (and some runway) from crowdfunding.