Too many people jumping into the "I wanna design a CPU too!" pool. Not nearly enough designing the needed microcontroller peripherals to have a working open hardware ecosystem.
The JTAG debugging hooks are a nice touch for an FPGA product though. Do the existing RISC-V silicon implementations not provide this?
Yes, silicon implementations provide JTAG, but it's very likely their implementation will not be reusable for an FPGA project (including the JTAG debug parts) because of their over-complicated (but powerful) solutions, which would consume way too much area and restrict the FMax.
Then this specific VexRiscv ecosystem also provides a basic SoC with a multi-master AXI4 interconnect, an SDRAM controller, embedded RAM, an APB3 interconnect, and some slaves like GPIO, UART, Timer, VGA. It's not incredible, but it's already a starting point ^^
See https://github.com/SpinalHDL/VexRiscv#briey-soc
For an FPGA implementation, it's often useful to hook the soft core into the FPGA's own JTAG controller, so that it's possible to program the FPGA and debug the core over a single connection. This is often not portable even between FPGAs, and certainly isn't portable to ASIC, but it makes development a lot easier, so...
Yes it would be, it is very easy to retarget the CPU to another memory bus.
Then the Wishbone one isn't "perfect", as it can't map memories which have latency without a performance penalty. It would be okay for the data bus of the CPU, with or without a cache, but on the instruction side, without an instruction cache, peak IPC would be divided by the memory latency.
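To put a number on that last claim, here's a rough back-of-envelope sketch (in Python; the latency figure is made up for illustration):

```python
def effective_ipc(peak_ipc: float, fetch_latency_cycles: int) -> float:
    """Without an instruction cache, every fetch stalls for the full
    memory latency, so peak IPC is divided by that latency."""
    return peak_ipc / fetch_latency_cycles

# A hypothetical 3-cycle memory turns a 1.0-IPC core into a ~0.33-IPC core.
print(effective_ipc(1.0, 3))  # → 0.3333333333333333
```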
I've never understood why some people say a license is needed for something on which patents have expired. What's the legal basis, or is it bluff?
I can understand that copyright exists in the documentation that describes AXI, but that copyright covers only their documentation and not your own implementation (or is some derivation claimed?). I can understand that trademark might exist in the name "AXI", so don't use that name. Trade secrets seem to be moot, if you're not associated with whoever is keeping the secret. If none of the above apply, what basis is there for demanding a license?
Licenses still apply post-expiry, in the US, Australia and a few other places.
It didn't use to be this way, but the big precedents for this were set around the 80s.
Most of the time, if you were a licensee before the expiry, then you continue to be so post-expiry. If however, you were not, then you have no existing contractual obligation, and can use it.
However it is something you need to consult a lawyer on, before making any decisions, because this area of law is more complicated than most, and is growing in complexity.
Thanks. That makes sense, as a licensee signs a contract, so they will be governed by the terms of the contract, and the contract can presumably be written to outlast the patent. It suggests to me that when a patent expires then licensees are potentially disadvantaged compared to non-licensees. It highlights that one must make sure that licenses are written to terminate royalties on patent expiry, or think twice about licensing patents that are close to expiry.
They are usable externally. The Realview boards use it to connect a CPU chip with an external northbridge, but it's really rare. Even in that case, it's just because the Realview board is meant to prototype SoCs with the help of an FPGA.
Usable, but only barely. As AXI / APB3 are intended for on-die communications, they lack the line coding and error correction used by other buses like PCIe or SATA. On the upside, though, they take a lot less area to implement.
That's more a distinction between serial and parallel buses than on-chip/off chip. If you're not pumping a single signal to gbps speeds, you don't need the clock recovery and ECC that serdes style connections imply.
FPGA tools come with a lot of IP that may make use of hard blocks such as memory or network controllers. If you look at ARM SoCs, the peripherals are often bought in from IP libraries.
Sort of exactly my point: ARM SoCs mate a proprietary CPU implementation from an existing ecosystem with a bunch of proprietary IP blocks from that same ecosystem (that in practice tend to make up the bulk of the silicon area). Our "open" hardware excitement is limited to replacing the former while putting our heads in the sand about the latter, which IMHO is more important if you want to actually get any benefit (beyond "I made a CPU!" of course).
ARM and x86 may be proprietary designs, but their behavior is excruciatingly well specified and understood. If you want high quality open hardware, someone needs to start replacing the rather less well-specified/understood implementations of DRAM and I2C and SPI and USB and...
Parts of DRAM, USB, and GPIO controllers at least are probably going to stay closed for the short term. The analog portions of their PHYs are typically per-process hard blocks that the fabs are super into keeping locked up.
I2C and SPI are pretty trivial though; I've written HDL for both of those. SPI is literally just a shift register and a chip enable signal.
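The "just a shift register" claim can be sketched in a few lines. This is a toy behavioral model (Python, not HDL) of a full-duplex SPI exchange: two shift registers rotating into each other, one bit per clock, MSB first; chip-select and clock-phase details are omitted:

```python
def spi_shift_exchange(master_word: int, slave_word: int, bits: int = 8):
    """Model one SPI transfer: each clock, the master's MSB goes out on
    MOSI while the slave's MSB comes back on MISO, and both registers
    shift left by one. After `bits` clocks the words have swapped."""
    mask = (1 << bits) - 1
    for _ in range(bits):
        mosi = (master_word >> (bits - 1)) & 1  # master's MSB out
        miso = (slave_word >> (bits - 1)) & 1   # slave's MSB back
        master_word = ((master_word << 1) | miso) & mask
        slave_word = ((slave_word << 1) | mosi) & mask
    return master_word, slave_word

# After 8 clocks the two bytes have simply traded places:
print(spi_shift_exchange(0xA5, 0x3C))  # → (60, 165), i.e. (0x3C, 0xA5)
```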
USB has an actual PHY hardware spec for exactly that reason. And I'm not sure I buy your GPIO argument, not a hardware engineer but I've known several who all swear never to use fab-supplied GPIO blocks.
It may well be that DRAM's analog requirements are fab-specific (though I'd be a little surprised if it were that bad: these are full swing classic bus signals), but nonetheless most of the complexity in these controllers is in the logic side: clocking, refresh, bank mapping, ECC, etc... That's all stuff we could (and should) be writing in open source HDL.
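As one example of that purely-digital logic, here's a sketch of the bank-mapping step (Python; the bit widths and column|bank|row layout are hypothetical, chosen so sequential accesses stream within an open row and stride across banks):

```python
def map_address(addr: int, col_bits: int = 10, bank_bits: int = 3,
                row_bits: int = 14):
    """Split a flat byte address into (row, bank, column) -- one of the
    process-independent jobs a DRAM controller does, alongside refresh
    scheduling, timing enforcement, and ECC."""
    col = addr & ((1 << col_bits) - 1)
    addr >>= col_bits
    bank = addr & ((1 << bank_bits) - 1)
    addr >>= bank_bits
    row = addr & ((1 << row_bits) - 1)
    return row, bank, col

print(map_address(0x1400))  # crossing a column boundary changes the bank
```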
> USB has an actual PHY hardware spec for exactly that reason. And I'm not sure I buy your GPIO argument, not a hardware engineer but I've known several who all swear never to use fab-supplied GPIO blocks.
It's not "what does the external spec require", it's "how do we achieve that at a given fab process". I.e. how do you achieve a specific slew rate etc. on a given process?
> It may well be that DRAM's analog requirements are fab-specific (though I'd be a little surprised if it were that bad: these are full swing classic bus signals), but nonetheless most of the complexity in these controllers is in the logic side: clocking, refresh, bank mapping, ECC, etc... That's all stuff we could (and should) be writing in open source HDL.
There are plenty of open source implementations for the digital side of things. On this very GitHub project:
You're overselling the complexity here. Yes: mating analog specifications to line drivers/receivers requires per-process design. And at the very top end of achievable technology (USB 3.1, PCIe 4, DDR4...) these designs are likely to be NDA-level IP you have to get from the fab or its partners. But come on, "slew rate" matching is a matter of looking up some capacitance and I/D curves and plugging them into a SPICE model. Literally everyone does this in school.
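To first order, the school exercise being referred to is just this (a sketch with made-up component values; real sign-off would use the process's SPICE models):

```python
def rise_time_s(load_capacitance_f: float, swing_v: float,
                drive_current_a: float) -> float:
    """First-order slew estimate for a constant-current driver into a
    capacitive load: t = C * dV / I. Good enough to size a pad driver
    before anyone opens a SPICE deck."""
    return load_capacitance_f * swing_v / drive_current_a

# 10 pF pad load, 3.3 V swing, 8 mA driver -> ~4.1 ns edge
print(rise_time_s(10e-12, 3.3, 8e-3))
```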
But that's not remotely where we are with RISC-V designs in the market, where we'd be very happy to get a LPDDR2 controller and a USB Hi-Speed link with an open interface that doesn't require junky proprietary drivers or crazy workarounds for undocumented hardware bugs.
(Also FWIW: I can't find a DRAM controller implementation in that VexRiscv tree. That line you point to looks maybe like an abstraction layer for plugging one in that's already on the FPGA?)
> You're overselling the complexity here. Yes: mating analog specifications to line drivers/receivers requires per-process design. And at the very top end of achievable technology (USB 3.1, PCIe 4, DDR4...) these designs are likely to be NDA-level IP you have to get from the fab or its partners. But come on, "slew rate" matching is a matter of looking up some capacitance and I/D curves and plugging them into a SPICE model. Literally everyone does this in school.
The SPICE model is closed and behind NDA. Seriously, can you point me to SPICE models of even ancient process nodes that are still manufactured? Let's say TSMC 0.13umG, just to pick one out of the blue.
> But that's not remotely where we are with RISC-V designs in the market, where we'd be very happy to get a LPDDR2 controller and a USB Hi-Speed link with an open interface that doesn't require junky proprietary drivers or crazy workarounds for undocumented hardware bugs.
Here's a USB2 controller that's existed for more than 15 years on opencores:
> (Also FWIW: I can't find a DRAM controller implementation in that VexRisc tree. That line you point to looks maybe like an abstraction layer for plugging one in that's already on the FPGA?)
It's imported from the Spinal standard library. Its underlying implementation is here:
You've lost me on what you're arguing. I'm saying "we want to spend more time integrating open hardware designs into RISC-V devices and less time making new CPU cores in our fun new HDLs".
And you're saying "we can't" because... why? I know that junk is there on OpenCores too. I've looked at it. I've synthesized some of it. No one uses it on silicon. That's the part we need to fix. And it's not because of SPICE parameters being behind an NDA.
There's no reason you can't plug a USB state machine into a per-process line driver; that's the way it works everywhere (even on FPGAs). Synopsys et al. ship their Verilog with the logic carefully isolated from the semiconductor process dependency (for obvious reasons). Don't tell me that open hardware device vendors can't do the same thing. They just haven't, largely because existing open source people are spending their time making CPU cores instead of integrating a SoC (and software stack) made up of open designs that can be plugged into fab-supplied analog blocks in an obvious way.
> You've lost me on what you're arguing. I'm saying "we want to spend more time integrating open hardware designs into RISC-V devices and less time making new CPU cores in our fun new HDLs".
Why don't you quit being an arrogant armchair architect prick on Hacker News, telling everyone what they should and should not be doing, and do it yourself.
This is not an open hardware project, this is a soft CPU core intended for final use in an FPGA: that is, not intended for manufacture. The best memory controllers and GPIO on your FPGA are the ones which are burned in at the fab. Why spend precious time developing a memory controller which will ultimately underperform the one you already have as part of your FPGA? To satisfy some dood on HN?
If it's so important and you're so disappointed with the quality of published peripheral controller HDL, then surely it's your job to show us all the right way.
Yikes. Personal attacks aren't allowed here and we ban accounts that do that, so please don't do that again. Your comment would be just fine if it were just the middle paragraph.
Perhaps a solution would be if the verilog cpu project provided a few specific model numbers for each part (obviously, choosing ones that are commonly available and lesser cost), so that public contributors can work towards a common implementation for said part.
I mean, the issue is that you're not going to find the specifications for the PHY stuff anywhere. There's no model numbers to clone. It's literally things like how do you etch out capacitors in the fab's process? What's the electrical properties of their different dopants? There's not really any models to clone without cloning the whole fab.
Our best option IMO is to wait until Moore's law hits more of a standstill, when fabs become more of a commodity and they're less secretive about the underlying process rules.
We're not talking about achieving on-die chip speeds at the picosecond range here. We're talking about interface specifications between the CPU and the DRAM, USB, etc -- these things can be designed for based on specs, and you don't need a trise/tfall equal to 10% of your FO4 inverter delay to achieve this. These are board level specifications which can be achieved with medium tier off the shelf components. And if they can't be achieved at the latest DDRX specs, then just design for one generation behind to at least get something going in the community.
Kinda off-topic. SpinalHDL is the second HDL that I've come across implemented in Scala. It seems that every popular programming language has at least one HDL implemented in it these days. Any obvious reasons for this trend?
Honestly, I'd wager it's because Verilog and VHDL basically suck in a lot of ways. They work, but almost anything else has better abstraction and reuse capabilities, even embedded DSLs or bespoke compilers, and offers better feedback loops during development. REPLs help a lot when building big circuits out of smaller ones. Being able to use a package manager to grab and manage SoC/IP components is convenient, etc.
Most of these DSLs tend to work at the level of RTL as opposed to something like "high level synthesis" where register usage is inferred, too (OpenCL, C, etc). So depending on how it's designed the results can be pretty close to hand-written code IME, without much overhead. They're more like "Super RTL" as opposed to real "high level" languages...
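A toy illustration of the embedding idea (Python stands in for Scala here; the mini-"HDL" and its names are invented): because components are ordinary objects, plain language features like loops and functions replace Verilog's clunkier generate-block style of parameterization:

```python
class And2:
    """A hypothetical 2-input AND gate primitive that knows how to emit
    a Verilog-flavored instance line for itself."""
    def __init__(self, name: str):
        self.name = name

    def emit(self) -> str:
        return f"and {self.name}(y_{self.name}, a_{self.name}, b_{self.name});"

def and_array(n: int):
    """Parameterized structure: n gate instances from one comprehension,
    instead of n hand-written instantiations."""
    return [And2(f"g{i}") for i in range(n)]

for gate in and_array(3):
    print(gate.emit())
```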
I suspect you're right, but it makes me sad. I really like verilog as a language and after working with it for a few years wish it was possible to program desktops in a similar way.
Everybody wants a new HDL but there's no standard yet. Embedding it in your favourite language is the obvious solution, and hence there will be at least one per language.
Honestly, I haven't really thought about it.
Not a constraining one, but more something like:
Do whatever you want with it, but if you find a bug, please tell me, and if you use it in a project which makes a lot of money, please share a bit with the open-source guys, don't be too greedy ^^