Hacker News | nuriaion's comments

It's physically pressed down to the pads on the motherboard with screws.


There is an issue if you want to create a small laptop with a long battery life, as SODIMM will use more space and power than LPDDR. (The SODIMM module and connector are very bulky and have a bad electrical layout. CAMM will be somewhere between the two.)


I think that would actually be more expensive, as you still have all the hard problems. Like, how do you isolate the power electronics? You can use air, which is cheap but needs a lot of space (I learned that you need 1mm per 1kV). You need a lot of space, which has to be very secure, etc.

Additionally, you need to control 5000 converters without getting oscillations etc. (probably over glass fibre). They will also probably need a lot more space.


At very high voltages, I would expect the whole lot to be sealed in a polymer resin for life. 1 million volts through epoxy resin requires a spacing of just 1 inch (call it two inches for a safety margin). Then your whole converter station can be the size of a shed, and the whole project can fit within existing space on existing utility land.


Voltage is just a potential, like gravity. Even at thousands of volts you would still need just 5 volts to operate equipment in the usual way. Communicating between large voltages can be done with fibre optic (I'm guessing) or WiFi.


How would you deal with the heat generated by the power electronics if everything is sealed in polymer resin?


Built in water cooling channels?


At these voltages, water is way too conductive.

You can use pumped oil though.


Good HiDPI scaling on my external monitor is one of the reasons I switched back to OS X. For me that works very well, independent of the monitor manufacturer.


Any notebook with Windows and multiple Linux distros work with HiDPI scaling on any combination of my various docks, monitors, and cables.

Apple definitely does not "just work" in this case. They really "think different" as in different standards to ensure consumer lock-in in their walled garden.


IME, macOS handles mixed HiDPI better than any other OS. I'm currently using a MBP with its HiDPI screen, and it's driving an LG 4K and a Dell 1920x1200 - all work and look fine.

I've tried Linux a few times in the past and it's been a mess - particularly in a mixed-DPI environment. It also seems to come down to individual apps rather than the desktop manager. Not long ago I really wanted to make Linux work and so was asking questions on forums about how to make HiDPI work well, and a common response was 'no one needs HiDPI'... got it.


We made it part of our CI to test whether all files are formatted with the formatter, and I bound the formatter to the save operation.

So I can just write, save the file, and everything is nice :)


Some years ago I worked on part of a specialized steering system for a car. This was done with certified everything (certified compiler, certified processor, a lot of paperwork, etc.)

This was a 16-bit processor, and the C compiler had a "funny" bug: if you had a struct with three 8-bit values in a row followed by a 16-bit value, it would overlap the last 8-bit value with the 16-bit value:

  struct {
    int8 a;
    int8 b;
    int8 c;
    int16 d;
  };
In this case the variables c and d would have the same address. This was on a CPU where we didn't have a debugger (not enough memory left for it); we only had a serial port for debugging.


I guess someone told the compiler authors that the automotive industry was a unionized industry.


For anyone not well-versed in C: a union does something similar to the bug mentioned. It's like a struct, but using the same address for each member.


You mean a few variables in a union doing work of one variable? Yes, poor joke, I can see my karma burn....


A serial port is all you need, if you have room for a wee GDB stub. Then you get the full power of GDB.

I do this routinely where the target has 256GB of RAM, and (not incidentally) specialized network hardware, but no dev infrastructure except gdbserver (which provides the stub) and sshd. I build in a Docker image that matches the target, but with dev tools, with the output bin directory sshfs-mapped to a directory on the target. I run the binary on the target under gdbserver, opening a socket listener. Then I run gdb natively on my dev machine and `target remote server:61231` to attach to that socket. If I didn't have easy access to listening ports on it, I could ssh-tunnel one in.

So, a serial port and small RAM doesn't have to mean you have no debugger.


It sounds like the poster was targeting a microcontroller rather than a more generally capable "embedded" CPU. 16 bit CPU, probably on the order of 100KB RAM and code space total if you're lucky. No operating system in the common sense, although you might have some notion of task switching if you're fancy. "Wee" in that context would imply a footprint on the order of 1KB of code and maybe 100 bytes of RAM.

I assume, by saying there wasn't room for debugging functionality, the poster meant that the "jtag" or equivalent hardware port simply couldn't work for single stepping due to the particular architecture requiring compiled-in cooperation of the firmware, and they didn't have the kilobytes of memory to spare.

These days, it's becoming more reasonable to throw Linux based compute nodes at problems previously best served by microcontrollers. A more powerful CPU isn't a superset of a microcontroller, though. Microcontrollers are still necessary when you have "hard" timing requirements and you need to account for where your CPU cycles are going. Even seemingly "solved" problems like participating on a CAN bus is difficult for a Linux based node. For example, while you can easily purchase CAN interface boards for raspberry pi and send and receive messages, you are pretty much guaranteed to drop some percentage of incoming messages at realistic bitrates. All the boards use MCP2515 SPI CAN controllers, and the linux driver simply can't schedule SPI transfers in response to interrupts fast enough to avoid mailbox overruns inside the controller. Maybe it's somehow been cleverly fixed since I last looked at it though?


It is fairly common nowadays to run Linux as one task on an RTOS, and have other tasks manage the CAN controller and other devices that need a low latency response. Or, just to use a coprocessor for low-latency work, as is commonly done to manage wifi.

Routing gdb stub traffic through a hosted Linux to an RTOS task or coprocessor is not an elementary exercise, but is something an engineering student might be expected to implement, even as just part of the real project.


That's a nice way to get gdb!

In our case it would probably not have helped. We had a fixed old MCU board where the functionality grew over the years. We were fighting over bytes...


At which company is the largest Haskell team in the world?


StanChart, the Cortex team. Equivalent to the Slang/SecDB team at Goldman.


Also, I would assume that the non-Tesla offerings will be cheaper. What I saw about the automobile industry is that they are very good at building factories/supply chains where they can produce as cheaply as possible. The setup of such a supply chain takes years, but once it's there I assume the prices will be lower than Tesla's. (Also, as far as I know, Tesla still needs a relatively big amount of work hours per car: 60h compared to the 30h industry standard.)


The space in an instruction is very limited. (If the representation of an instruction needs more bits, you need more bandwidth, more cache space, etc.) So it can be beneficial to only address 8 registers and then have detection of spilling to RAM, etc. (It can even be beneficial to specify only 2 registers (a = a + b instead of a = b + c) and replace a copy of registers with a rename.)


Exactly. In principle even memory can be renamed, although I'm not sure any current CPU actually does it (there are rumors). It would be great if the actual SSA graph could be directly passed from the compiler to the CPU, but what's saved by getting rid of renaming would probably be used to handle the much harder decoding. It would probably have implications for context switching overhead.


Basically, you have no guarantee where the pointer q is pointing. Some compilers/static code analyzers will yell at you about this code.


Isn't it guaranteed that it points sizeof(int) bytes higher than the address of b?

Whether something useful is behind that address is another question


There's a guarantee relative to the address of b. But there's no guarantee about the relative addresses of a and b themselves, i.e. where they are placed in automatic storage. So even setting aside the idea of optimizing the test away at compile time, there's no guarantee that the comparison result will be a specific value. a could be above or below b, and they are not necessarily adjacent objects.


From the C11 spec quoted in the article

> If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.

So although most implementations might produce an address that's relative to b, that's not actually guaranteed by the spec unless it's `&b + 1`.


... which it is in the program under discussion. Again, the address which has no guarantee relative to the address of b or the value of q is the address of a.


From reading the spec it seems that it is undefined behaviour for anything other than `&b + 1`. That is, one past the end of b is fine, but not `&b + 2` etc.


I wonder why someone decided that accessing "one past the end" should be fine


Short answer: so you can walk a pointer down the elements of an array without having to have weird code to deal with the end.


It's explained in the addendum to the article.


thanks


Depending on your architecture and compiler, it could be that everything is 128-bit aligned, so you would have "empty holes" in your memory layout. (For example, some DSPs.)


I don't think that address of anything even has to be a positive number, according to the C standard.

