There is an issue if you want to create a small laptop with a long battery life, as SODIMM uses more space and power than LPDDR. (The SODIMM module and connector are very bulky and have a bad electrical layout. CAMM will be somewhere between the two.)
I think that would actually be more expensive, as you still have all the hard problems. For example: how do you isolate the power electronics? You can use air, which is cheap but needs a lot of space (I learned that you need roughly 1 mm per 1 kV). You need a lot of space, which has to be very secure, etc.
Additionally, you need to control 5000 converters without getting oscillations and so on (probably over glass fiber). They will also probably need a lot more space.
At very high voltages, I would expect the whole lot to be sealed in a polymer resin for life. 1 million volts through epoxy resin requires a spacing of just 1 inch (call it two inches for a safety margin). Then your whole converter station can be the size of a shed, and the whole project can fit within existing space on existing utility land.
Voltage is just a potential, like gravity. Even at thousands of volts you would still need just 5 volts to operate equipment in the usual way. Communicating between large voltage domains can be done with fibre optics (I'm guessing) or WiFi.
Good HiDPI scaling on my external monitor is one of the reasons I switched back to OS X. For me that works very well, independent of the monitor manufacturer.
Any notebook running Windows or any of several Linux distros works with HiDPI scaling on any combination of my various docks, monitors, and cables.
Apple definitely does not "just work" in this case. They really "think different" as in different standards to ensure consumer lock-in in their walled garden.
IME, macOS handles mixed HiDPI better than any other OS. I'm currently using an MBP with its HiDPI screen, and it's driving an LG 4K and a Dell 1920x1200; all work and look fine.
I've tried Linux a few times in the past and it's been a mess, particularly in a mixed-DPI environment. It also seems to come down to individual apps rather than the desktop manager. Not long ago I really wanted to make Linux work, so I was asking questions on forums about how to make HiDPI work well, and a common response was "no one needs HiDPI"... got it.
Some years ago I worked on part of a specialized steering system for a car. This was done with certified everything (certified compiler, certified processor, a lot of paperwork, etc.).
This was a 16-bit processor, and the C compiler had a "funny" bug. If you had a struct with three 8-bit values in a row followed by a 16-bit value, it would overlap the last 8-bit value with the 16-bit value:
struct {
    int8 a;
    int8 b;
    int8 c;
    int16 d;
};
In this case the variables c and d would have the same address. This was on a CPU where we didn't have a debugger (not enough memory left for it); we only had a serial port for debugging.
A serial port is all you need, if you have room for a wee GDB stub. Then you get the full power of GDB.
I do this routinely where the target has 256GB of RAM, and (not incidentally) specialized network hardware, but no dev infrastructure except gdb-server (which provides the stub) and sshd. I build in a docker image that matches the target, but with dev tools, with the output bin directory sshfs-mapped to a directory on the target. I run the binary on the target under gdb-server, opening a socket listener. Then I run gdb natively on my dev machine, and `target remote server:61231` to attach to that socket. If I didn't have easy access to listening ports on it, I could ssh-tunnel one in.
So, a serial port and small RAM doesn't have to mean you have no debugger.
It sounds like the poster was targeting a microcontroller rather than a more generally capable "embedded" CPU. 16 bit CPU, probably on the order of 100KB RAM and code space total if you're lucky. No operating system in the common sense, although you might have some notion of task switching if you're fancy. "Wee" in that context would imply a footprint on the order of 1KB of code and maybe 100 bytes of RAM.
I assume, by saying there wasn't room for debugging functionality, the poster meant that the "jtag" or equivalent hardware port simply couldn't work for single stepping due to the particular architecture requiring compiled-in cooperation of the firmware, and they didn't have the kilobytes of memory to spare.
These days, it's becoming more reasonable to throw Linux-based compute nodes at problems previously best served by microcontrollers. A more powerful CPU isn't a superset of a microcontroller, though. Microcontrollers are still necessary when you have "hard" timing requirements and need to account for where your CPU cycles are going. Even a seemingly "solved" problem like participating on a CAN bus is difficult for a Linux-based node. For example, while you can easily purchase CAN interface boards for the Raspberry Pi and send and receive messages, you are pretty much guaranteed to drop some percentage of incoming messages at realistic bitrates. All the boards use MCP2515 SPI CAN controllers, and the Linux driver simply can't schedule SPI transfers in response to interrupts fast enough to avoid mailbox overruns inside the controller. Maybe it's been cleverly fixed since I last looked at it, though?
It is fairly common nowadays to run Linux as one task on an RTOS, and have other tasks manage the CAN controller and other devices that need a low latency response. Or, just to use a coprocessor for low-latency work, as is commonly done to manage wifi.
Routing gdb stub traffic through a hosted Linux to an RTOS task or coprocessor is not an elementary exercise, but is something an engineering student might be expected to implement, even as just part of the real project.
In our case it would probably not have helped. We had a fixed, old MCU board whose functionality had grown over the years. We were fighting over bytes...
Also, I would assume that the non-Tesla offerings will be cheaper.
What I have seen of the automobile industry is that they are very good at building factories and supply chains that produce as cheaply as possible. Setting up such a supply chain takes years, but once it's in place I assume the prices will be lower than Tesla's. (Also, as far as I know, Tesla still needs a relatively large number of work hours per car: about 60 h compared to the 30 h industry standard.)
The space in an instruction is very limited. (If the representation of an instruction needs more bits you need more bandwidth, more cache space etc.)
So it can be beneficial to address only 8 registers in the encoding, and then detect spilling to RAM and the like in hardware. (It can even be beneficial to specify only 2 registers (a = a + b instead of a = b + c) and replace a register-to-register copy with a rename.)
Exactly. In principle even memory can be renamed, although I'm not sure any current CPU actually does it (there are rumors). It would be great if the actual SSA graph could be directly passed from the compiler to the CPU, but what's saved by getting rid of renaming would probably be used to handle the much harder decoding. It would probably have implications for context switching overhead.
There's a guarantee relative to the address of b. But there's no guarantee about the relative addresses of a and b themselves, i.e. where they are placed in automatic storage. So even setting aside the idea of optimizing the test away at compile time, there's no guarantee that the comparison result will be a specific value. a could be above or below b, and they are not necessarily adjacent objects.
> If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.
So although most implementations might produce an address that's relative to b, that's not actually guaranteed by the spec unless it's `&b + 1`.
... which it is in the program under discussion. Again, the address which has no guarantee relative to the address of b or the value of q is the address of a.
From reading the spec, it seems that it is undefined behaviour for anything other than `&b + 1`. That is, one past the end of b is fine, but not `&b + 2` etc.
Depending on your architecture and compiler, it could be that everything is 128-bit aligned, so you would have "empty holes" in your memory layout. (For example, on some DSPs.)