Half the comments here are talking about the vtuber herself. Who cares. It's been talked about before. Just imagine if half the thread is discussing what gender she is. What I am interested in is the claims here https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-.... (what is it called if it comes with a proof?).
The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?
In C? No, not unless you write your own scaffolding to do it.
In C++? Maybe, but you’d need to make sure you stay on top of using thread safe structures and smart pointers.
What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
So you’re not spending time chasing oddities because you missed a variable initialisation, hit a race condition, or ran into some kind of use-after-free.
While a lot of people say this slows you down and that a good programmer doesn’t need it, my experience is that even the best programmers forget. At least for me, I spend more time trying to reason about C++ code than Rust, because I can trust my Rust code more.
Put another way, Rust reduces how much of the codebase I need to consider at any given time to just the most local scope. I work in many heavy graphics C and C++ libraries, and have never had that level of comfort or mental locality.
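To make that concrete, here's a tiny sketch of my own (not from the article) of what "the compiler keeps you honest" looks like: sharing mutable state across threads simply won't compile until you reach for the thread-safe containers (Arc/Mutex) you might otherwise forget.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Sharing a plain `&mut i32` across threads won't compile; the borrow
        // checker pushes you to Arc<Mutex<_>>, i.e. the thread-safe container
        // you might otherwise forget in C++.
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }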
For me it isn't even that it catches these problems when I forget. It is that I can stop worrying about these problems when writing the vast majority of code. I just take references and use variables to get the business logic implemented without the need to worry about lifetimes the entire time. Then once the business logic is done I switch to dealing with compiler errors and fixing these problems that I was ignoring the first time around.
When writing C and C++ I feel like I need to spend half of my brainpower tracking lifetimes for every line of code I touch. If I touch a single line of code in a function I need to read and understand the relevant lifetimes in that function before changing a single line. Even if I don't make any mistakes doing this consumes a lot of time and mental energy. With Rust I can generally just change the relevant line and the compiler will let me know what other parts of the function need to be updated. It is a huge mental relief and time saver.
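A toy example of mine (nothing from the post) of the kind of thing the compiler flags instead of leaving it for a debugging session; this deliberately fails to compile:

    fn main() {
        let reference;
        {
            let value = String::from("temporary");
            reference = &value; // error[E0597]: `value` does not live long enough
        }
        // In C or C++ this would be a dangling reference you'd chase at runtime;
        // here it simply doesn't build.
        println!("{reference}");
    }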
I agree that Rust is the better language because it gives you the safe tools by default.
Smart pointers are no panacea for memory safety in C++ though: even if you use them fastidiously and avoid raw pointer access, iterator invalidation or OOB access will come for you. The minute you allocate and have to resize, you're exposed.
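For contrast, here's a rough sketch (my own toy code) of the same hazard written in Rust: pushing into a Vec while iterating over it is exactly the resize-under-your-feet case, and the borrow checker refuses to compile it.

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            if *x == 2 {
                // error[E0502]: cannot borrow `v` as mutable because it is also
                // borrowed as immutable; the resize that would invalidate the
                // iterator is rejected at compile time.
                v.push(4);
            }
        }
    }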
Yeah that’s definitely true. Any time I use a C++ API that I did not write or analyze at my best, it’s a shotgun waiting to go off. Hell, even then, that assumes I know enough to catch everything. I’ve recently been doing security analysis of some very popular repos and C++ is terrifying at times.
Rust isn’t perfect but it gives me so much more trust in everything I do.
An additional advantage of Rust is the extensive macro system. The ability to generate a bunch of versioned structures out of a common description, all with their own boilerplate and validation code, is invaluable for this kind of work. Some of it can be done in C++ with templates as well, but the ergonomics are on a different level.
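A hand-wavy sketch of what I mean (struct names and fields are made up for illustration, not taken from the actual driver): one macro invocation per version, each expanding into a C-layout struct plus its validation boilerplate.

    // Illustrative only: stamp out versioned structs with shared validation code.
    macro_rules! versioned_struct {
        ($name:ident, version: $ver:expr, { $($field:ident : $ty:ty),* $(,)? }) => {
            #[repr(C)]
            #[derive(Debug, Clone, Copy)]
            pub struct $name {
                pub version: u32,
                $(pub $field: $ty,)*
            }

            impl $name {
                pub fn validate(&self) -> Result<(), &'static str> {
                    if self.version != $ver {
                        return Err("version mismatch");
                    }
                    Ok(())
                }
            }
        };
    }

    versioned_struct!(InitRequestV1, version: 1, { flags: u32 });
    versioned_struct!(InitRequestV2, version: 2, { flags: u32, queue_count: u32 });

    fn main() {
        let req = InitRequestV2 { version: 2, flags: 0, queue_count: 4 };
        assert!(req.validate().is_ok());
    }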
> What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
For what it’s worth, the same is true of Swift. But since much of the original Rust team was also involved with Swift language development, I guess it’s not too much of a surprise. The “unsafe” API requires some deliberate effort to use; no accidents are possible there. If you do anything unsafe, it’s all very verbose, through a very narrow window of opportunity.
I have such a love hate relationship with Swift. It genuinely has some great ergonomics and I see it as a sister language to Rust.
I just wish it was more cross platform (I know it technically works on Linux and Windows…but it’s not a great experience) and that it didn’t have so much churn (though they’ve stopped futzing with the core components as much with Swift 4+).
I also wish it was faster. I did an Advent of Code against some friends. I picked Rust, they picked Swift. The Rust code was running circles around their Swift ones even when we tried to keep implementations the same.
Anyway, that’s a rant, but to your point, I feel like Swift could have been as big as Rust, or bigger given the easier use, with many of the same guarantees. I just wish the early years were more measured.
>The Rust code was running circles around their Swift ones even when we tried to keep implementations the same.
I've done Advent of Code a few years -- even Javascript implementations, if using a good (optimal) algorithm, are highly performant, so I'm suspicious of the claim. In most AoC problems, if your code is observably different between languages, it's the fault of the algorithm, regardless of language. But perhaps you are referring to well-profiled differences, even if there are no observable differences.
That said, in projects other than AoC I've compared Swift to C++ and it's hard to deny that a low-level language is faster than Swift, but Swift itself is certainly fast compared to most dynamically typed or interpreted languages like Python, JS, Ruby, etc. which are vastly slower than anything in Swift.
Swift is fast but it’s both easy to accidentally write slow code and there is a not-insignificant amount of overhead to the reference counting and constant dynamic table lookups.
When I say Rust ran circles around it, I mean the difference was not noticeable unless you were timing it, and was the difference of 200ms vs 400ms, or 3 seconds vs 5 seconds, so nothing to write home about necessarily.
This is not the first time I've wanted to understand a bit better the performance differences today between these approaches: Rust without a garbage collector, Swift with ARC reference counting, and garbage-collected languages such as JavaScript.
I know JavaScript has an unfair advantage here, since the competition between V8 and the other JavaScript engines has been huge over the years, and garbage collection in JS is not often a problem. I see more people struggling with the JVM GC and its spikes in resource usage.
I've also heard that the Erlang VM (whether the code is written in Elixir or Erlang itself) implements GC at a different level: not globally, but per process, in a more granular way.
Is there a good resource that compares the current state of performance between these languages or approaches?
Just like UNIX kernel code is a strange subset of C.
I always find it strange how "kernel C" is C, even though ISO C would bork in kernel space, but doing a similar C++ subset is pointed out as not being C++.
I'd also call kernel C strange. Mostly justified, but there's no particular reason the memory allocation call with the same behavior as malloc() in IOKit isn't named malloc().
The history of Ada kept it from getting widespread attention. Sometimes it’s just about the right time and place.
Plus there’s something to be said for cargo being a killer feature for Rust. Easy build configuration, easy package management and access to a large ecosystem of libraries.
Between that and the book: as steep a learning curve as the language itself has, the actual on-boarding process is way more accessible than for many other compiled languages.
Sure, but let's not pretend it is the very first time anything better than C and C++ came to be, and Ada is only a possible example.
Rust's success story is bringing the Cyclone type system into the mainstream, and in such a way that other languages (even those with automatic management runtimes) started looking at affine and linear type systems for low-level performance optimizations.
However, for explicit type conversions, bounds checking in strings and arrays, modules, co-routines, assignment before use,... there were plenty of alternatives, which for one reason or another didn't take off, mostly for reasons that weren't really technical.
I am curious to know if the trait system helps a lot with mapping the underlying kernel features/quirks. Which language is better at creating abstractions that map closer to how the kernel works?
Hmm, I’m no kernel writer, but I don’t think traits necessarily offer anything in Rust over C++, where I’d use polymorphism with abstract interfaces.
When I’ve written lower-level components, the things that have really been godsends outside the safety features are enums (much more powerful than C/C++ unions/enums and much more ergonomic than std::variant).
And even though I said traits don’t offer too much more than C++, one thing they really do offer is when combined with generics. Rust generics let you define trait requirements well, though not as extremely flexibly as C++ concepts/constraints.
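Rough sketch of what I mean (the type names here are invented for illustration): the enum carries its payload next to its tag, matching is exhaustive, and a generic function states its requirements as trait bounds.

    use std::fmt::Debug;

    // Data-carrying enum: tag and payload travel together, unlike a C union.
    #[derive(Debug)]
    enum GpuEvent {
        Flip { frame: u64 },
        Fault { addr: u64, write: bool },
        Timeout,
    }

    // Generic function whose requirements are spelled out as a trait bound.
    fn log_event<T: Debug>(event: &T) {
        println!("{:?}", event);
    }

    fn main() {
        let ev = GpuEvent::Fault { addr: 0xdead_beef, write: true };
        // Exhaustive match: forget a variant and the compiler tells you.
        match &ev {
            GpuEvent::Flip { frame } => println!("flip at frame {frame}"),
            GpuEvent::Fault { addr, write } => println!("fault at {addr:#x}, write={write}"),
            GpuEvent::Timeout => println!("timeout"),
        }
        log_event(&ev);
    }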
I have a lot of experience in C, a lot of experience in C++, and some experience with Rust (I have some projects which use it). My opinion is that it's true, and the other comments are good explanations of why. But I want to point out, in addition to those: There's a reason why Rust was adopted into Linux, while C++ wasn't. Getting C++ to work in the kernel would almost certainly have been way less work than getting Rust to work. But only Rust can give you the strong guarantees which makes you avoid lifetime-, memory- and concurrency-related mistakes.
You shouldn't underestimate the amount of personal hatred that Linus and several other Linux maintainers have for C++. I can't really say I blame them - C++ before C++11 was a bit of a nightmare in terms of performance and safety.
I'm not exactly a C or Rust expert, so better to check @dagmx's comment for that, but I know some C++ and have worked with networking enough to know some pitfalls.
Talking of C++, it can be really solid to work with your own data structures where you control the code on both ends. Using templates with something like boost::serialization or protobuf for the first time is like magic. E.g. you can serialize the whole state of your super complex app and restore it on another node easily.
Unfortunately it's just not the case when you're actually trying to work with someone else's API/ABI that you have no control over. It's even worse when it's a moving target and you need to maintain several different adapters for different client/server versions.
Possible? Definitely. Easier? Probably not. At least for the most part, there are a couple things which C(++) can sometimes be more ergonomic for and those can be isolated out and used independently.
watching a virtual persona stream their development of their M1 GPU drivers is one of the most cyberpunk things I've ever seen! it's easy to forget that this world is looking closer and closer to those dreamed up by Gibson, Stephenson, etc. what a time to be alive.
I like your optimism, but it seems more like a Philip K. Dick novel to me.
>In 2021, society is driven by a virtual Internet, which has created a degenerate effect called "nerve attenuation syndrome" or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
It's an interesting set of tradeoffs - vtubing has made it possible for people to be on-screen personalities who normally would not be able to as easily, because it can be very hard to overcome problems with your IRL appearance. That stuff really matters if you want to succeed on YouTube or Twitch. In comparison, if you want to be a vtuber, there are relatively affordable ways to grab a stock model and customize it. You can also just commission a custom one from an artist and rigger - though I think the cost of that is sadly out of reach of an amateur, it's not as high as you might assume.
If you stream without a face camera at all it generally hurts your ability to grow an audience, and unfortunately our society is still pretty focused on appearance so if you don't look great you're going to potentially get a lot of toxicity in your chat. A vtuber avatar acts as an equalizer in this sense and also lets people express their personality and aesthetics visually in a way that might not otherwise be easy - they can pick eye and hair colors they think represent them without having to use colored contacts or hair dyes, etc.
A few different people I know found that having a vtuber avatar made it much easier for them to get into streaming regularly and it did grow their audience, so I'm happy to see the technology catch on and improve.
> It's an interesting set of tradeoffs - vtubing has made it possible for people to be on-screen personalities who normally would not be able to as easily, because it can be very hard to overcome problems with your IRL appearance.
That's not the reason most of the popular ones do it. (Ironmouse, sure, but not anyone else.)
Most of the bigger ones are corporate characters, so they're actually forced to hide themselves and it's more like they're uncredited actors. Besides that, it gives you privacy, avoids stalkers, means you don't have to do your makeup or get dressed (up or at all), things like that. Appearance still definitely matters though, but now it's your voice carrying you and not your face.
Small nit, but I was confused: I think Johnny Mnemonic is Gibson? And I had to look up NAS; I think that's part of the movie, not the book. I think we have another couple decades before the androids and mood organs of Philip K. Dick, at least, but I could be wrong. But parts of Gibsonian cyberpunk are already here!
You are so right. Bizarre. I thought that was Philip K. Dick for all of my life, I think. Even worse because Johnny Mnemonic is one of my favorite movies! I’ve got the Black Shakes worse than I thought.
I was shocked to read this comment because I interpreted your original comment as meaning a Philip K Dick novel would be optimistic compared to the cyberpunk dystopia in which we find ourselves now. lol
Anyway it's pretty fair to confuse the two I think, there can't be any way Johnny Mnemonic wasn't influenced by Blade Runner (I'm guessing it's true of both the movies and of the books). Very similar themes and protagonists at a certain level of abstraction.
Johnny Mnemonic was dead tree published in 1981. (I have the issue of OMNI)
Blade Runner, the movie, was released in 1982.
And PKD's DADoES was published in 1968.
Honestly, IMHO, Blade Runner in movie form shares more DNA with Alien than proto-cyberpunk. They're both character studies rather than world-building exercises, albeit with massive credit to Ridley Scott and the design team that their "background" world is more lifelike than many world-centric films.
But Blade Runner fundamentally lacks the multi-level cutthroat competitive aspect of cyberpunk. Oddly enough, a quirk that frequently recurs in British scifi, in contrast to the more free market American themes.
Aesthetically it looks like the worlds imagined in a Philip K. Dick novel, but none of the actual dystopian aspects are present in what GP described (rampant poverty/class disparity, environmental destruction, etc.)
I don't think someone sharing their craft through a virtual avatar is any more responsible for these things than the flying cars from Blade Runner would be.
A polity with an outermost shell of no-BS IC spooks in a ratio of twenty to one cybersec defense to offense. There is the problem of sciengineers conceiving photonic computing in the labs, but the committee-member wage/salary slave cuts cost corners (or doesn't, but bloats up on unnecessary complexity) and we get the worst join on the Venn diagram in the industry spec.
Can someone explain this vtoon trend to me? It doesn't seem to be driven by anonymity because their real name is easily findable, so I assume it's something else? It seems very common, especially in certain communities.
In case of Marcan/Lina I got the impression that he created Lina just for fun. It started as an April fools joke (Lina 'took over' Marcan's live stream), but Marcan seems to enjoy it a lot, even going so far as contributing to the Inochi2D software (used to render Lina) to improve all sorts of facial features.
I don't have the impression that in Marcan's case it was ever about anonymity, it is more about a creative expression.
Up until Lina's introduction on April 1st, I had never seen a vtuber stream, and I must say it is quite fun to watch. Though personally I wish Lina's voice were tweaked a bit, because it can be hard to understand what she is saying.
What I like about Lina and Alyssa is the increase of visibility of LGBT+ people in the more "hardware" part of software development.
As you get more low level, less visual and more niche, there's less and less diversity among software developers. And everyone should feel welcomed into hacking the kernel.
Everyone being welcomed into contributing to the kernel can result in a natural lack of diversity, as it selects for people interested in technical programming.
It is the ham-fisted attempts to increase diversity for its own sake that have ended up being discriminatory. [1]
If there are barriers stopping people contributing, these should be removed. I expect these to be economic barriers, rather than those based on particular characteristics.
Why is it that whenever gender and identity is mentioned in a technical community suddenly people start talking about "ham-fisted" and forced inclusions?
There's nothing forced about people getting into programming and becoming relevant for it, that just by chance it turns out they are not cis+heterosexual.
Nobody in the Asahi crew was "a ham fisted attempt to increase diversity".
The only forced thing here is the "forced inclusion" topic. That, for whatever kind of reason, people stubbornly keep trying to bring to the front.
I think the point is about many engineering orgs adding "minorities" for the sake of PR rather than because these "minorities" are great resources.
It is obvious that there's terrific engineers and professionals regardless of sexual orientation and we have countless proofs behind it, last but not least the linked article and many of the other asahi contributors.
But when the likes of Google or Facebook are obviously hiring because you are a minority, that's when the solution to gender or sexual discrimination is simply... reversed.
> Why is it that whenever gender and identity is mentioned in a technical community suddenly people start talking about "ham-fisted" and forced inclusions?
You answered your own question; it’s a technical community. Centering political activism is counterproductive to the fundamental telos of being a technical community.
Existing members, unsurprisingly, push back against colonization of their community by those more interested in gender ideology or identity politics than technology.
My point was criticising your underlying assumption that the lack of diversity is due to an unwelcoming environment, presumably as a result of intolerance to certain characteristics.
I might be wrong here, but I have always been under the impression that they are the same person. I didn't even realise that it was supposed to be a secret.
Simple things like Lina and Marcan using the same hostname for the development machine (Raider), using the same IDE (Kate), using the same bash aliases (the one I like most is 'stfu', which appears to suppress terminal output), and a huge overlap of knowledge. Then there is also the same fast-paced speaking style that Marcan has.
I did notice Lina always (rather jokingly) denying being Marcan, especially during the April 1 debut, but I always assumed that that was just part of the joke.
Again, I might be wrong here, maybe they are not the same person after all, but as far as I can tell it's just Hector Martin having fun.
> Isn't it the opposite, though?
You mean Lina being first, and Marcan being an alter-ego? Marcan (Hector Martin) is a real person, he has done quite a few interviews on camera. https://youtu.be/dF2YQ92WKpM?t=496
Some people just prefer their public persona to be in the form of an avatar instead of their real face. They want to have something there to represent themselves instead of just streaming a screen and nothing else, but they would rather that representation be an avatar or character rather than their physical selves.
Personally, I like the idea. IMO, we lost something when it became normal/expected to link our real-life selves to our internet selves. We arguably gained some unpleasant things too (thinking of Facebook here).
A persistent avatar/alias as an internet persona seems to match the real-life to internet-life relationship better. IE, for many, they’re different and rather separable aspects of life.
There was a Black Mirror episode that makes plain the downside of a virtual representation: it can be taken from you and manipulated by someone else. The persona becomes a sale-able commodity, and can be separated from your control.
I remember hearing that this exact thing happened a few years ago in either Japan or South Korea.
There was a talent agency that had three or four performers who took shifts streaming games through a shared vtuber persona that had built up some popularity. At some point the agency fired the performers and replaced them with a new performer who didn't deliver the same performance.
The fans weren't told, but it was immediately apparent, so they started demanding the change be reverted. The fans were ignored and both the vtuber's popularity and the agency fell back into obscurity.
I guess the lesson here is that personas are personal.
Also Japanese viewers like this better, and you don’t have to look good, and if you’re streaming from inside a soundproof box in summer without AC then you don’t even have to wear clothes.
If you were the type of person who got rude / explicit / insulting comments whenever you showed your actual face on camera, the vtoon trend would be quite easy to understand.
Hector Martin (assuming they're the same person) looks perfectly fine on camera. They're an average-looking person, but I fail to find anything anyone would criticize about their looks.
My first response was a bit curt so I've removed it. (Edit: I can't seem to remove it.) Yes I'm aware a lot of women get harassment when they show their face streaming. It's one of the reasons it's so popular in Japan for women to be vtubers rather than show their face while streaming while the popularity for men doing it is significantly less (though there's still plenty who do it without changing their voice).
However Hector Martin is not a woman living in Japan and they look perfectly fine/average.
It’s like getting a specific haircut, choosing what model of glasses to get, or getting a nose job, or a tattoo. Or even just picking what style of clothes you want to represent yourself in. I.e. it’s simply choosing your appearance, using more modern technology.
Number one: you're using an overloaded term, but I suppose that your specific social conditioning has given you a specific concept of the definition of normal self expression, and this clearly falls outside of those boundaries.
Number two: this person's goal CLEARLY isn't to fool anyone, it's not a deep fake video, and it's frankly none of your business or concern that they enjoy using an animated avatar.
When they were new, most people thought of nose jobs (and other cosmetic surgery) as being weird and creepy. Some people still do. What is “normal” is a function of commonality, and at least VTooning does not cost a fortune and risks neither complications nor needing further surgery for years to come.
Very well articulated comment from someone with highlights such as
"you write for the Gaurdian and relentlessly spout feminist propaganda"
"destruction that activist hooligans created in Portland"
"Neurodiverse isn't a real medical term"
Sounds like you've got a very particular axe to grind from a very particularly narrow world view. I hope one day you learn to enjoy your fellow human beings for who they are
Don't forget to dissociate the concept of virtual worlds which already exist and are quite popular (MMOs etc.), and the idea of a virtual world owned and imagined by Zuckerberg which has been a terrible failure so far.
The opinions around facebook’s metaverse are hilarious when things like Second Life, WoW, EVE, VR Chat, and even Roblox have been so massively popular yet unrecognized for what they are.
I kinda think to some degree those words are the reason those opinions are the way they are. The metaverse is being sold as this new, groundbreaking, fantastic thing, when they've already existed for decades.
Also, VRChat is infinitely better than the Metaverse, does not require you to sell your soul to Facebook, does not require a branded headset (OR ANY AT ALL, YOU CAN PLAY IT 2D), runs well, HAS LEGS ON YOUR AVATAR FFS, and massive community support in and out of game.
Also it didn't require billions in investment. Facebook could have literally burned $900 million on hookers and cocaine, and then thrown $100 million to buy VR chat, and be better off than they are now.
I feel that Facebook's metaverse push is a desperate attempt to reverse the evolution of our identities from a baseline, offline, unitary identity towards a constructed and abstract, online, contextual, anime-girl form. I don't know precisely why it's always specifically anime && girls, but everything else seems to fall face first, so I guess it's survival of the fittest.
I agree with everything except for the billions of dollars part. It seems quite obvious (but maybe I'm wrong?) that most of that investment went into the hardware that Meta has been developing over the years, and some of the "platform" capabilities that surround that hardware (think their app store for example). Sure, the virtual experiences are part of it. But I think it's not where the majority of the money went.
They currently have the most popular VR hardware platform. And it's the only piece of hardware that Meta really has, everything else they make is software. If Meta wants to keep growing, being the leading producer of "the next big thing" in hardware would certainly help. Zuckerberg is betting on VR/AR being that next big thing. Only time will tell I guess. But I find it a little weird when I see people commenting on Meta's investment as if all they did was create a basic Second Life clone. They're building some of the most innovative consumer hardware at the moment. I'm not hugely into it because Meta doesn't really seem like the company I'd like to entrust with cameras pointing at my eyeballs, but you can't deny that the hardware they've built so far is quite impressive.
They've already existed, and had fun, engaging content. Meta in that sense is putting the cart before the horse. You can't convince us all to get into this world and then figure out what we're supposed to be doing there later on.
Everyone just pretends there wasn't a time when people were spending years of their life in an MMO.
That was ruined by games trying to appeal to more mainstream people (with less hardcore Features like losing all your possessions upon death) and the micromonetization strategies that everyone hates.
I wonder if VR will even have games like early Everquest, WoW, EVE, etc but in VR.
The difference between Facebook's Meta and other worlds is the ambition of getting a billion+ people into the same virtual world. How big is Second Life? Or Roblox?
Facebook is going to struggle to get a billion people on to their website/app soon. I really don’t see this many people wanting to use a shitty VR chat app.
I didn’t make myself clear enough; I was talking about people brushing off the core concept as intrinsically unfeasible. Obviously their specific version is tone-deaf and lacking features, but the baseline concept is proven valid by those other versions.
Vtubers are popular because they aren’t in a virtual world you have to go visit. You just open up YouTube/Twitch and they stay in that little window without the possibility of getting trapped in an MMO where you die in real life if you die in the game.
Although some of them are designing “virtual worlds” as an excuse to get around YouTube moderation and taking 30% of superchats.
Same thing was said about full touch screen iphones circa 2007.
VR/AR just hasn't been done right as of now, but it's getting close. Demand is there. Imagine virtual schooling during times like Covid, but instead of Zoom, kids actually see each other in VR and can interact with each other.
Yeah it's pretty great, though I hope it won't be leveraged to reduce physical presence in social settings. The real world is still the place to be when possible, but it would be great not having to fly across the world for face to face meetings. Teams is just not good enough. VR might eventually become so.
The m1n1 hypervisor specialised for debugging is a pretty genius idea. Is anyone aware of anyone else taking a similar approach? Seems like it would be a pretty generally applicable technique and would make OS/hardware driver development a lot more approachable.
Even before true CPU-supported "hypervisors," there was shim software like SoftICE that worked similarly to m1n1 in that you would run an OS underneath and then use a supervisor tool to trace and debug the OS under inspection.
More recently, it's fairly common to use a hypervisor or simulator for kernel debugging in device driver development on Windows via Hyper-V.
A lot of Linux driver development is done using qemu as well, although this is usually more targeted and isn't quite the same "put a thin shim over the OS running on the hardware" approach.
The flexibility and I/O tracing framework in m1n1 are pretty uniquely powerful, though, since it was built for reverse engineering specifically.
Some developers used user mode Linux for driver development, and I think some development has happened on the NetBSD rump kernel more recently. I find the work that goes into building this kind of tooling all pretty impressive.
The nouveau project used a kernel module to intercept mmio accesses: https://nouveau.freedesktop.org/MmioTrace.html.
Generally speaking hooking onto driver code is one of the preferred ways of doing dynamic reverse engineering. For userspace components, you can build an LD_PRELOAD stub that logs ioctls, and so on.
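As a rough sketch of that idea (my own toy version in Rust, built as a cdylib; a real tracer would handle the variadic third argument and decode the request numbers properly):

    // Build with crate-type = ["cdylib"], then run:
    //   LD_PRELOAD=./libioctl_trace.so some_program
    use libc::{c_char, c_int, c_ulong, c_void, dlsym, RTLD_NEXT};

    #[no_mangle]
    pub unsafe extern "C" fn ioctl(fd: c_int, request: c_ulong, arg: *mut c_void) -> c_int {
        // Find the "real" ioctl further down the link chain.
        let real: unsafe extern "C" fn(c_int, c_ulong, *mut c_void) -> c_int =
            std::mem::transmute(dlsym(RTLD_NEXT, b"ioctl\0".as_ptr() as *const c_char));
        // Log the call, then forward it untouched.
        eprintln!("ioctl(fd={fd}, request={request:#x})");
        real(fd, request, arg)
    }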
Idea-wise, the S/360 actually ran on hardware microcode, and all these ideas of virtual machines and hypervisors came from an unauthorised development called CP-67, or later VM. IBM used it for developing MVS etc., as some hardware had yet to be built for certain features.
But these modern-day developments are crazy.
How can you manage 100+ structures in a language you just learnt (Rust), for a secret GPU the vendor shares no info about?
The fact so much hardware these days is running a full real-time OS all the time annoys me. I know it is normal and understandable but everything is such a black box and it has already caused headaches (looking at you, Intel).
This isn't even that new of a thing. The floppy disk drive sold for the Commodore 64 included its own 6502 CPU, ROM, and RAM. This ran its own disk operating system[1]. Clever programmers would upload their own code to the disk drive to get faster reads/writes, pack data more densely on the disk, and even implement copy protection schemes that could validate the authenticity of a floppy.
And all that engineering resulted in a floppy drive that was slower and more expensive than comparable units for other home computers. I'm not sure if there is a lesson there...
Well, it was slower due to a hardware problem. Basically, the hardware serial device had a bug which required a bit bang comms channel to the disk drive. Doing that amidst the sometimes aggressive video DMA is what caused all the slowdowns.
Back in the day I owned machines that did it both ways, but not a C64. My Atari computer also had a smart disk drive. Worked over something Atari called SIO, which is an early ancestor of modern USB. Back then, the Atari machine was device independent and that turned out to be great engineering!
Today we have Fuji Net devices that basically put Atari and other computers on the Internet, even to the point of being able to write BASIC programs that do meaningful things online.
The C64 approach was not much different, working via RS-232. But for a bug, it would have performed nicely.
Now, my other machine was an Apple ][, and that disk was all software. And it was fast! And being all software meant people did all sorts of crazy stuff on those disk drives ranging from more capacity to crazy copy protection.
But... That machine could do nothing else during disk access.
The Atari and C64 machines could do stuff and access their disks.
Today, that Fuji Net device works via the SIO on the Atari, with the Internet being the N: device! On the Apple, it works via the SmartPort, which worked with disk drives that contained? Wait for it!!
A CPU :)
Seriously, your point is valid. But, it's not really valid in the sense you intended.
Too late for me to edit, but yes, I did confuse the source of the bug. Please clarify the source of the slowness in the C64 drive scenario. Was it VIC-20 backward compatibility, or?
In any case, I maintain the engineering wasn't at fault, having a CPU etc. Fastloaders showed it to be just poor software, and that's a point I did not make clear enough.
Commodore wanted new C64 drives to be backward compatible with the VIC-20 and vice versa. They failed the second goal, and the C64 sold ~10x the number of units the VIC-20 did, making the whole exercise pointless.
All to sell more outdated garbage chips made by MOS, instead of using a proper FDC controller on the CPU bus with a cheap standard floppy.
The slowness was due to a hardware bug in the 6522 VIA chip. The shift register (FIFO) would lock up randomly. Since this couldn't be fixed before the floppy drive needed to be shipped, they had the 6502 CPU bit-bang the IEC protocol, which was slower. The hardware design for the 154x floppy drive was fine, and some clever software tricks allow stock hardware to stream data back to the C64 and decode the GCR at the full media rate.
Probably not a fair comparison in some ways but this reminds me of that story of Woz making a disk drive controller with far fewer chips by being clever and thoughtful about it all. I’m probably misremembering this.
You’re talking about the Integrated Woz Machine. It was a custom disk controller that Wozniak created that was used in the Apple ][, /// and I believe on the original Macs. It was cheap, fast and worked.
The 1541 was slow because the C64’s serial bus was slow. Data was clocked over the bus 1 bit at a time. Various fastloaders sped up the data rate by reusing the clock line itself as a data line (2 bits at a time); later HW adapters added a parallel port or even USB to overcome the serial bus bottleneck.
Basically Commodore was going to use an IEEE-488 bus for the drive, then decided it was too expensive late in the design and switched to this hacky serial bus that bottlenecked everything.
The 1541 was set to be a highly capable and performant machine, but an interface/design bug held it back and delivered dismal performance whenever connected to the C64. They tried to fix it but it couldn't be rescued, so speed freaks needed to wait for the 1570 series.
It was partially rescued by fastloaders and later JiffyDOS. Fastloaders tended to max out at 10-13x if the disk format was unchanged but if you could reformat or recode the files you could go anywhere from 25x speed to over 40x (transwarp) on stock hardware. DolphinDOS gave a 25x speed up by using the parallel port with 1541.
Epyx games used the Vorpal format which gave 15x load speedup.
The point is, the speed issues weren’t really the 1541’s fault although GCR coding could have benefited from a HW decoder.
Oh I know it’s been a thing forever. Hell, my NeXT Cube with its NeXTDimension display board was such. The NeXTDimension board ran its own entire stripped-down OS. It used an Intel i860 and a Mach kernel… It was also massively underutilized. If NeXT had done a bit more legwork and made the actual Display PostScript server run entirely on the board, it would have been insane. But the 68K still did everything.
Yes, but ... Commodore did this because they had incompetent management. They shipped products (VIC-20, 1540) with a hardware defect in one of the chips (6522), a chip they manufactured themselves. The kicker is
- C64 shipped with 6526, a fixed version of 6522
- C64 is incompatible with 1540 anyway
They crippled the C64 for no reason other than to sell more Commodore-manufactured chips inside a pointless box. The C128 was a similar trick of stuffing a C64 with garbage leftover from failed projects and selling a computer with 2 CPUs and 2 graphics chips at twice the price. Before slow serial devices they were perfectly capable of making fast and cheaper-to-manufacture floppies for PET/CBM systems.
In the era of CP/M machines, the terminal likely had a similar CPU and RAM to the computer running the OS too. So you had one CPU managing the text framebuffer and CRT driver, connected to one managing another text framebuffer and application, connected to another one managing the floppy disk servos.
I guess I should have clarified more: I dislike everything running entirely separate OSes that you have no control over at all and are complete black boxes.
The fact they are running entire OSes themselves isn’t that big of a deal. I just hate having no control.
Oh God, the 1541 ran soooo hot, hotter than the C64 itself. I remember using a fan on the drive during marathon Ultima sessions. The 1571 was so much cooler and faster.
There's this great USENIX talk by Timothy Roscoe [1], which is part of the Enzian Team at ETH Zürich.
It's about the dominant unholistic approach to modern operating system design, which is reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, and eventually leading to uninspiring and lackluster OS research (e.g. Linux monoculture).
I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.
I think making it harder to build an OS by increasing its scope is not going to help people to build Linux alternatives.
As for the components, at least their interfaces are standardized. You can remove memory sticks by manufacturer A and replace them with memory sticks from manufacturer B without problem. Same goes for SATA SSDs or mice or keyboards.
Note that I'm all in favour of creating OSS firmware for devices, that's amazing. But one should not destroy the fundamental boundary between the OS and the firmware that runs the hardware.
Building an OS is hard. There's no way around its complexity. But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
And furthermore, OS research is not only about building Linux alternatives. There are a lot of operating systems that have a much narrower focus than full-blown multi-tenant GPOS. So building holistic systems with a narrower focus is a much more achievable goal.
> As for the components, at least their interfaces are standardized
That's not true once you step into SoC land. Components are running walled-garden firmware and binary blobs that are undocumented. There's just no incentive to provide a developer platform if no one gives a shit about holistic OSes in the first place.
> But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).
How so? I can see the limited access control in Linux is an issue, and for this reason augmented security MAC (Mandatory Access Control) controls exist like SELinux and AppArmor.
But I don't see how the nature of everything being a file is a vulnerability in itself.
If you want to follow the principles of capability security, then a key part of the strategy is to eliminate “ambient authority”, that is, any kind of globally accessible way of obtaining a capability.
In traditional unix-like systems, file descriptors are very close to capabilities. But the global filesystem namespace is a source of ambient authority.
There are a couple of ways to fix this issue: fine-grained per-process namespaces like Plan 9, so that the filesystem’s authority can be attenuated as necessary, so it becomes more like a capability. Or eliminate absolute pathnames from the API, so you have to use functions like openat() to get an fd relative to an existing fd.
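A tiny sketch of the openat() flavour (illustrative only; the directory and file name are made up): the directory handle is the only authority the code holds, and every subsequent open is relative to it rather than to a global namespace.

    use std::ffi::CString;
    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    fn main() -> std::io::Result<()> {
        // Acquire the capability once: a handle to a single directory.
        let dir = File::open("/tmp")?;

        // Open relative to that handle; no absolute path is used here, so no
        // ambient authority over the rest of the filesystem is exercised.
        let name = CString::new("example.txt").unwrap();
        let fd = unsafe { libc::openat(dir.as_raw_fd(), name.as_ptr(), libc::O_RDONLY) };
        if fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        // ... read from fd, then close it ...
        unsafe { libc::close(fd) };
        Ok(())
    }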
It was a lame attempt at humor, a roundabout way of referring to the simplifying assumptions that *nix systems generally make of the underlying machine.
Every cell in your body is running a full blown OS fully capable of doing things that each individual cell has no need for. It sounds like this is a perfectly natural way to go about things.
"Let the unit die and just create new ones every few years" is a brilliant solution to many issues in complex systems. Practically all software created by humans behaves the same way - want a new version of your browser or a new major version of your OS kernel or whatever else - you have to restart them.
"The creatures outside looked from DNA to k8s YAML, and from k8s YAML to DNA, and from DNA to k8s YAML again; but already it was impossible to say which was which."
Death isn’t a solution to maintenance issues; there are some organisms, including animals, that live many hundreds of years and possibly indefinitely. The reason seems to be to increase the rate of iterations, to keep up the pace of adaptation and evolution.
It's more of a "sleep mode". There are still a lot of wakeups, and cron jobs running clean up of temporary files, cache management and backup routines. Background services still run.
Poor comparison - DNA is compiled assembly language code. It is meant to be spaghetti to save space and reuse proteins for multiple functions. In that regard it’s the most efficient compiler in the universe.
No idea about dinosaurs but some reptilian red blood cells live much longer as in 500+ days for turtles vs 120 for humans. However, it varies widely with mice and chickens having much faster turnover. https://www.sciencedirect.com/science/article/pii/S000649712...
Five species of salamanders have similar enucleated red blood cells, but I can’t find out how long they last in comparison. https://pubmed.ncbi.nlm.nih.gov/18328681/
One theory is it’s an adaption to having unusually large genomes which would otherwise be an issue, but biology is odd so who knows.
I think that's the reason why mammals evolved in this way. Red blood cells go everywhere, and I mean everywhere, in your body. Most other cells can't get close enough.
Isn’t the primary purpose of the ME to run DRM and back door the system? How would it be useful at all open source? People would just turn it off entirely.
This has already been solved. Modern devices come with low power general purpose cores and OSs can wake up briefly to check for new messages. I just can’t see why you would ever want to remote manage your own laptop where some software installed on the OS isn’t sufficient.
Recent Intel CPUs come with efficiency cores and support S0 standby, aka "Modern Standby", which can periodically wake up and do stuff like check for new emails.
I don't know. This sounds very computer-sciency-ish. We build smaller tools to help build big things. Now the big things are so good and versatile we can replace our smaller tools with the big things too. With the more powerful tools, we can build even bigger things. It is just compiler bootstrapping happening in hardware world.
The problem is that there's so much unexplored territory in operating system design. "Everything is a file" and the other *nix assumptions are too often just assumed to be normal. So much more is possible.
Possible, but apparently rarely worth the extra effort or complexity to think about.
The Unix ‘everything is a file’ has done well because it works pretty well.
It also isn’t generally a security issue, because it allows application of the natural and well-developed things we use for files (ACLs, permissions, etc.), without having to come up with some new bespoke idea, with all its associated gaps, unimplemented features, etc.
Hell, most people don’t even use posix ACLs, because they don’t need it.
> ITS (of PDP-10 hacker fame) - processes could debug and introspect their child processes. The debugger was always available, basically. The operating system provided support for breakpoints, single-stepping, examining process memory, etc.
> KeyKOS (developed by Tymshare for their commercial computing services in the 1970s) - A capability operating system. If everything in UNIX was a file, then everything in KeyKOS was a memory page, and capabilities (keys) to access those pages.
In every operating system, the basic unit of abstraction will be a process -- which necessitates a scheduler, some form of memory protection, some way for the process to interact with the kernel, and the notions of "kernel space" and "user space". There is a lot of room for innovation there (see ITS), but I suspect most of the room for innovation is in how an OS abstracts/refers to various parts of the system.
This is a bit like asking "Can you elaborate on particle physics" in 1900. The point is that we don't know because there's been so little experimentation in the space. Not a lot of funding for "build an OS that uses completely different idioms and concepts to what we know, possibly without even a goal other than trying something out".
Same. It's not about the principle, but that generally these OSes increase latency etc. There's so much you can do with interrupts, DMA, and targeted code when performance is a priority.
I sometimes wonder how fast things could go if we ditched the firmware and just baked a kernel/OS right into the silicon. Not like all the subsystems which run their own OS/kernels, but really just cut every layer and have nothing in between.
You'd find yourself needing to add more CPUs to account for all the low-level handling that is done by various coprocessors for you, eating into your compute budget, especially with a high interrupt rate, as you wouldn't have it abstracted and batched in the now-missing coprocessors.
I'm actually very happy about the rise of VTubers/live avatars. I imagine that there are a lot of people that would love to interactively share their knowledge/skills on youtube/twitch but avoid doing so because they're not conventionally attractive or just too shy.
But what about people that hate the "twitchification" of media? I don't like when youtubers I enjoy watching switch to streaming and then all their content is identical "poggers" chat and donation begging garbage. Streamers all feel the same, regardless of the content. I don't feel there's any value to a hundred instances of a stupid emoji streaming by in a """chat""" window, and everything just feels like attention whoring "pick me" nonsense.
Vinesauce has been streaming since well before twitch, and their content got significantly more "Twitch"-y after they embraced the current system. It's obvious why, because if you play into the chat begging, the surface level """interaction""", then you get more money from the parasocial twelve year olds with mom's credit card.
But I don't want my content full of ten second interruptions as a robot voice reads off the same tired joke somebody paid ten dollars to get read off.
> But what about people that hate the "twitchification" of media?
Well, when those people put months of their lives into reverse-engineering a reliable stack of code for an undocumented platform, and want to do presentations and write-ups of their work, those people can decide to present however they want.
In the meantime maybe people who are contributing precisely nothing can STFU about the people who are.
That's all a choice of the streamer. Some streamers choose to have TTS (text to speech) but a lot don't. Twitch is bad about TTS in particular, but it's far less common on the YouTube side of streaming.
Doubly so, the Twitch chat is going to mirror the streamer. So if the streamer is being a goon and playing up the Twitch culture, you'll get Twitch chat. But you can also have a very different chat experience if you establish from the get-go that chat is to behave a specific way.
Donations and twitch emerged as funding mechanisms because Youtube ads are not providing enough revenue for the vast majority of content creators out there.
> I imagine that there are a lot of people that would love to interactively share their knowledge/skills on youtube/twitch but avoid doing so because they're not conventionally attractive or just too shy.
Couldn't they just not show themselves on camera at all?
Project idea: take something that looks like a mindflayer (D&D art with tentacles around its mouth) and use it for live coding.
Instead of displaying a mouse and cursor, have the tentacles reach out and manipulate the code from behind a semi-transparent pane of glass that the code is displayed on.
The quantity of exclamation points lol. I assume I'm just too old to get it...I'm okay with that, and I'm damn impressed with the results, so more power to Lina, whatever works for her.
Yeah, it's just... sorry, but there is nothing in the world so exciting that 149 exclamation points (thanks to another poster for counting) is warranted.
When every statement is exciting and special, then none of them are.
shrugs I'm getting old these days, but to me those exclamation points helped convey the tone the writer was chasing. I read the article, then watched one of the VODs of their streaming, and what do you know: the inner tone I heard while reading the article matched how they talk!
But perhaps I'm biased: I write how I talk too, and use and abuse punctuation to attempt to mimic my own voice.
I'm pretty sure that M1 Linux GPU drivers are indeed that exciting. Being able to ditch the ever-increasing burden of keeping macOS from telling the Cupertino mothership every single little thing I do on my Mac for a proper Linux desktop on the best laptop ever made (I have a maxed out 64gb/8tb M1 max rMBP) is... exciting, to put it mildly.
Usually I get annoyed by that, but in this case I read the whole thing and didn't even notice. It helps that they didn't come in "packages" bigger than 1.
I was just getting ready to say the same thing and was wishing for a plugin that would replace all exclamation points with a period. That would make reading much easier
Would that not mean general excitement on the part of the author?
I find it hard to analyze these things by numbers alone. It's context that really matters and if there truly is a baseline excitement, there really should be a high number of exclamations.
It can in moderation, but there's a phenomenon that people (often women) in communications overuse exclamation points, trying to come across as friendly and not assertive [1][2]. It's something I've noticed in my professional career, and not me trying to make a stereotype. You start to tip that usage too far, like 149 of them in a technical document, and it detracts from your content.
>A bunch of shader cores, which process triangles (vertex data) and pixels (fragment data) by running user-defined programs. These use different custom instruction sets for every GPU!
Why are different instruction sets exciting to the author? What does exclaiming that fact really mean for me as the reader? What I got out of it was the author was so surprised by this basic fact, that it was so out of this world unbelievable, that we also must be surprised by it. I'd rather read technical explanations with leadership/competence.
The intent I got was the idea of all the GPU instruction sets being different is notable, and perhaps unexpected and at the least challenging. The latter being an exclamation in the context of reverse engineering makes some degree of sense to communicate the fundamental difficulty inherent in this effort.
Now, we could go back and forth a few more times, invite friends and curious others to the party and end up with many different takes on this text.
What's the takeaway?
For me, it's lack of judgement. Content is king. I want to know what others mean to say and how they feel about it. I don't want to judge them on those things because what exactly is the point?
Make things better for me?
...better for people like me?
...people in general? Doubt that given the diversity of takes the experiment above would generate.
Waste of time. That's the takeaway.
And to press that idea home somewhat more: when I talk to other people (not text, but talking), I find diversity beautiful. Some are stoic. Some are bitter, jaded, well-worn soldiers. Others are excitable. Funny? Yup.
I submit you cost yourself more wondering why they don't conform than ever getting the answer is worth. Further, taking their intent in a more liberal, forgiving sense will get you to the good stuff others intend to share more quickly and efficiently. Plus, you might laugh a little.
I do. And it's enjoyable.
When I read the piece, I enjoyed feeling a bit of the joy, excitement, frustrations experienced by the authors and that amplified the good in all of it a little. Great! They did have one heck of an adventure after all.
Finally, you mention women overusing exclamation points. Maybe they are just being women? Just putting that out there.
I am pretty sure if we were to take a poll of those of us who are women as to whether being one is OK would trend to a definitive yes. The ones I know well tend to have a lot of fun and more colorful, rich relationships too. Kind of envy that personally.
I think it is OK to be a woman. After all, being a guy, I tend to be a guy. Why would women play it any differently?
Thoughts for you to consider.
Cheers! <-- Exclamation intended to convey a general upbeat mood, hoping it's catchy.
I'm not sure what your points are other than "people are different, get used to it." Surely there is room for different writing styles and perspectives in the world. That is really not what I was getting at. If that's what you took away from my comment, I would invite you to re-evaluate it.
>Finally, you mention women overusing exclamation points. Maybe they are just being women?
I did mention it, but I did not come up with this on my own. You can read the study I linked to. Calling it "women being women" sounds a bit like the old "boys will be boys" attitude, as if your gender defines your actions or is an excuse.
Instead I'd rather look at why a large group of people feel like they need to express themselves as upbeat and happy. If you read the study I linked to, it talks about this with explanation. I invite you to read some more.
It's a bit more subtle than that. Understanding others is important. Blanket "there is too much x" or similar kinds of observations leave out the actual person writing and the context.
I read the stuff, and have read other stuff related to this discussion. Have had some conversations with others too. Those conversations are where "Maybe they are just being women" comes from.
Suppose my response was rooted in the exclamation not always being up beat and happy. That's context related.
I just tried watching this with a Pitch Shifter Chrome extension. The voice goes from grating to just ... bad audio, at the lowest possible setting - which is far more tolerable than the original. I may need to go and edit the extension to turn down the pitch even more.
At least in the videos I watched, I found the speech pattern (like the talking speed) to match Marcan's, if you e.g. compare it to his presentation with fail0verflow at C3 for the PS3 hack, or some of his streams.
But that is also speculation ofc.
Yeah... I was hoping to passively watch it, maybe glean something, but yeah.
The other thing too: watching other people code is kind of not fun, like George or Rene. More fun to see some screeching video with no actual info, just the end result of a robot pissing in a cup.
Also goes to show how much work goes into writing code/having some end result. Maybe it will get more exposure since there is a growing trend for vtuber stuff.
The stuff he works on and OP above is beyond my current level/field. I wouldn't really watch it to follow along rather just for the company/background noise. Maybe learning something in the process. Currently I don't watch anyone regularly for code.
I get it though, I used to watch Social Network or Silicon Valley to hype myself up to pursue something. Now my problem is too many projects ha.
I still am trying to figure out my own business though. I'm getting tired of writing code/constant upkeep. Soon there will be a library just to make a button with its own context manager.
Bah, I've watched some of Jon Blow's videos on Jai, and even though he's much more pleasant to watch on video, it's still annoying: a normal blog would be so much more 'efficient'.
> It feels like Rust’s design guides you towards good abstractions and software designs.
> The compiler is very picky, but once code compiles it gives you the confidence that it will work reliably.
> Sometimes I had trouble making the compiler happy with the design I was trying to use, and then I realized the design had fundamental issues!
I experience a similar sentiment all the time when writing Rust code (which for now is admittedly just toy projects). So far it's felt like the compiler gives you just enough freedom to write programs in a "correct" way.
I don't really do unsafe/lower-level coding, so I can't speak much to that side of things, however.
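A tiny illustration of that "the design had fundamental issues" moment (my own toy example in Rust, not something from the article): code that quietly holds a reference into a collection while also mutating it is rejected outright, and the fix usually ends up being a clearer design anyway.

    // This version does NOT compile: you can't keep a reference into a Vec
    // alive across a push to it (error E0502):
    //
    //     fn broken(items: &mut Vec<String>) {
    //         let first = &items[0];          // immutable borrow starts
    //         items.push("new".to_string());  // mutable borrow: rejected
    //         println!("{}", first);          // borrow still alive here
    //     }
    //
    // The error nudges you toward separating the read from the mutation,
    // which is also simply easier to reason about:
    fn print_first_then_append(items: &mut Vec<String>) {
        let first = items[0].clone();   // take what you need first
        items.push("new".to_string());  // then mutate freely
        println!("{}", first);
    }

    fn main() {
        let mut items = vec!["hello".to_string()];
        print_first_then_append(&mut items);
        println!("{} items total", items.len());
    }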
The 2015 MBP was the last one that was passable for me; what came after is horrible. Even the new MBP that has real ports again is still not as good as the 2015 in terms of keyboard.
Thinkpad keyboards are great (I own a couple T400’s and used to daily drive a X61s), but the latest MacBook Pros have real, actually good keyboards afaik too.
I've tried them but they're nowhere near Lenovo quality sadly. Unfortunately Lenovo is on a downward trend with ever less travel with each generation so I guess they'll be on par in a few years :(
That is indeed the trend, for basically all manufacturers. For my part, it's not too much of a worry -- I barely use my laptop's keyboard directly; that's what my custom mechanical keeb is for, after all. The laptop being thin, light, and powerful is important to me (otherwise I'll just use a desktop). I despise my work laptop (a Metabox/rebranded Clevo 15" thing), as it is huge, thick, and has a terrible keyboard when I'm forced to use it.
These low-travel ones may not be perfect, but they're decent, and that's enough for me personally for the times I do need to use it.
I can’t agree. I have a recent X series, and at this point, it’s approaching butterfly-keyboard levels of unreliability. I’ve always found the travel to be on the spongy side, but this iteration is terrible. Also, my trackpad literally has to warm up before it’s usable. I’ve tried with the track point, but it just doesn’t work for me.
All currently shipping Macbook Airs and Pros have a keyboard that is, as far as I can tell, identical to the great one from 2015 that we love. They switched them all back after the butterfly keyboard fiasco, but hardware pipelines are 2-4 years deep and it took a while.
Not one comment here about the "GPU drivers in Python". I like the idea of iteration speed over pure speed.
And the coprocessor called "ASC" also has similarities with Python: the GPU does the heavy lifting, while the ASC (like Python) interacts with it through shared memory.
That's the same thing Python does with a lot of its libraries (written in C/C++).
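If it helps to picture the shared-memory part, here is a toy single-producer/single-consumer mailbox in Rust. This is purely an illustration of the general idea of two sides agreeing on a memory layout and exchanging indices; it is not the actual ASC firmware interface, which is exactly what the team had to reverse engineer, and real coprocessor interfaces add doorbell interrupts, IOMMU mappings, cache maintenance, and so on.

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    const SLOTS: usize = 8;

    // Toy mailbox: both sides see the same memory and only exchange indices.
    struct Mailbox {
        slots: [AtomicUsize; SLOTS], // stand-in for real command structures
        head: AtomicUsize,           // producer bumps this
        tail: AtomicUsize,           // consumer bumps this
    }

    impl Mailbox {
        fn new() -> Self {
            Mailbox {
                slots: std::array::from_fn(|_| AtomicUsize::new(0)),
                head: AtomicUsize::new(0),
                tail: AtomicUsize::new(0),
            }
        }

        fn push(&self, cmd: usize) -> bool {
            let head = self.head.load(Ordering::Relaxed);
            if head - self.tail.load(Ordering::Acquire) == SLOTS {
                return false; // ring is full
            }
            self.slots[head % SLOTS].store(cmd, Ordering::Relaxed);
            self.head.store(head + 1, Ordering::Release); // publish the slot
            true
        }

        fn pop(&self) -> Option<usize> {
            let tail = self.tail.load(Ordering::Relaxed);
            if tail == self.head.load(Ordering::Acquire) {
                return None; // ring is empty
            }
            let cmd = self.slots[tail % SLOTS].load(Ordering::Relaxed);
            self.tail.store(tail + 1, Ordering::Release); // free the slot
            Some(cmd)
        }
    }

    fn main() {
        let mbox = Arc::new(Mailbox::new());
        let consumer = {
            let mbox = Arc::clone(&mbox);
            thread::spawn(move || {
                let mut seen = 0;
                while seen < 4 {
                    if let Some(cmd) = mbox.pop() {
                        println!("consumer got command {}", cmd);
                        seen += 1;
                    }
                }
            })
        };
        for cmd in 1..=4 {
            while !mbox.push(cmd) {} // spin until there is room
        }
        consumer.join().unwrap();
    }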
> And the coprocessor called "ASC" also has similarities with Python
It's a processor, not a programming language :) The team has essentially strapped the API into something that you can poke with Python instead of with a native driver.
Loved reading this. About the triangle/cube screenshots: were they taken on Linux running on a physical Mac? How were you able to deploy your driver? Does the M1 GPU have a basic text/console mode allowing you to start and work with Linux?
Displaying to the screen and such was already working; you can already use Asahi Linux and have a GUI and everything. It's just that it's all rendered by the CPU right now.
I've never played games on my M1 Macbook - what are some popular reasonably graphics intensive games that it would support? Could it run Dota2 for example?
Disco Elysium, Hades and Civ VI run really well on my M1 MBA (using a 4K display). These games are not as resource-heavy as Dota2 AFAIK, but I'm comparing them to my maxed-out 16-inch MBP from 2020, which acted more like a cursed semi-sentient toaster than a high-spec laptop.
Resident Evil Village recently came out and it performs surprisingly well even on the low end MacBook Air M1 with only 7 GPU cores. What's even more impressive is that the game is playable (low gfx settings, 30fps) when running that machine on low power mode.
It is irksome to me given how much Linux is used inside Apple (board bringup, debugging, etc). You benefit from these gifts, Apple, give back a teensy bit in return. Everybody wins.
I think there are larger barriers to getting Windows running on Apple Silicon that would need to be addressed first.
For one example, Windows ARM kernels are pretty tied to the GIC (ARM's reference interrupt controller), but Apple has its own interrupt controller. Normally on ntoskrnl this distinction would simply need hal.dll swapped out, but I've heard from those who've looked into it that the clean separation has broken down a bit, and you'd have to binary-patch a Windows kernel now if you don't have source access.
Apple Silicon doesn't use GIC, but uses AIC (Apple Interrupt Controller).
"Apple designed their own interrupt controller, the Apple Interrupt Controller (AIC), not compatible with either of the major ARM GIC standards. And not only that: the timer interrupts - normally connected to a regular per-CPU interrupt on ARM - are instead routed to the FIQ, an abstruse architectural feature, seen more frequently in the old 32-bit ARM days. Naturally, Linux kernel did not support delivering any interrupts via the FIQ path, so we had to add that."
My next point was about how the NT kernel was designed to handle exactly this kind of change pretty cleanly, but that clean barrier has been allowed to stagnate. So way back, NT kernels for weird systems like SGI workstations that had x86 CPUs but were decidedly not PCs otherwise simply needed hal.dll swapped out with a version written by the hardware vendor to paper over those system differences like how interrupts are routed around and how to access PCI devices. That's no longer really an option however.
It would take more than a GPU Driver, or actually, drivers at all.
Windows has some behavior that is hard-coded and cannot be changed except by Microsoft. This behavior can fairly easily be bent on Linux, but Windows drivers can't bend the rules for some of these issues.
The biggest issue that immediately comes to mind is that the M1 MMU (Memory Management Unit) can only handle 16K pages. Linux can easily be built for 16K, and most programs work on 16K, but Windows currently only supports 4K on ARM. No driver can fix that - only Microsoft. Of course, if Microsoft joins the Apple Silicon train (which Apple has said they are more than welcome to do), then they will probably have official Apple documentation and not need the Asahi Linux findings.
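For what it's worth, "most programs work on 16K" largely comes down to not hard-coding 4096. A sketch of what that looks like (my own example, assuming the `libc` crate as a dependency): query the page size at runtime and the same binary is happy on a 4K or 16K kernel.

    // Ask the OS for the page size instead of assuming 4096.
    // On an Asahi 16K kernel this prints 16384; on most x86 distros, 4096.
    fn main() {
        // SAFETY: sysconf(_SC_PAGESIZE) has no preconditions.
        let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) };
        println!("page size: {} bytes", page_size);
    }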
In https://asahilinux.org/2022/03/asahi-linux-alpha-release/ it says that 4k pages work on the HW, but 16k has some advantages. Dropping 4k page size support would imply a departure from ARM architecture compatibility, I think; it might also have non-technical problems (ARM license).
The ARM spec allows machines that implement any of 4KB, 16KB, or 64KB granules, or any combination thereof. Apple shipped plenty of phone SoCs that didn't support 4KB pages for the CPU (A9->A13).
Interesting - at first this didn't seem to be detailed in the popular (Wikipedia etc.) or ARM web pages I could find, but it turned out to be about terminology: "translation granule" was the right term, and then you can find mentions like "The granule sizes that a processor supports are IMPLEMENTATION DEFINED".
The GPU driver is an important component for desktop users, but it's just one small part of what you need to bring Linux to these machines. There's a ton of work in all kinds of other areas, down to changes in the core of the kernel. A GPU driver for Windows won't do you any good without all those other drivers.
Depends on the context. The OP, IMO, was being passive-aggressive, almost incredulous.
As I said, every time anything comes up about Asahi, it is guaranteed that at least two threads will occur: this one, and comments about the vTuber. At this point - 2 years after the project began - it adds little value to the conversation, to the extent that the project lead, who was previously very active here, doesn't contribute anymore.
>Asahi Lina, our GPU kernel sourceress. Lina joined the team to reverse engineer the M1 GPU kernel interface, and found herself writing the world’s first Rust Linux GPU kernel driver. When she’s not working on the Asahi DRM kernel driver, she sometimes hacks on open source VTuber tooling and infrastructure.
Asahi Linux has been upstreaming, but of course it's ongoing. The GPU driver in particular depends on some Rust-in-the-kernel bits which aren't in the mainline kernel, yet. The 6.1 kernel has some Rust support, 6.2 will have more, but I don't believe that will be enough for the GPU driver ... yet.
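For a sense of what those Rust bits buy you today: roughly this kind of minimal module builds against the in-kernel Rust support. This is a sketch from memory of the upstream rust_minimal sample, so field names and APIs may differ between kernel versions; a full DRM driver needs many more abstractions that aren't upstream yet.

    // Rough shape of a minimal Rust kernel module, along the lines of
    // samples/rust/rust_minimal.rs (details vary by kernel version).
    use kernel::prelude::*;

    module! {
        type: HelloAsahi,
        name: "hello_asahi",
        author: "Example Author",
        description: "Minimal Rust kernel module sketch",
        license: "GPL",
    }

    struct HelloAsahi;

    impl kernel::Module for HelloAsahi {
        fn init(_module: &'static ThisModule) -> Result<Self> {
            pr_info!("hello from Rust\n");
            Ok(HelloAsahi)
        }
    }

    impl Drop for HelloAsahi {
        fn drop(&mut self) {
            pr_info!("goodbye from Rust\n");
        }
    }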
Asahi Lina is a maintainer in the Asahi Linux project. She is now well known because of the achievement she earned: programming the Asahi Linux GPU driver for Apple Silicon Macs.
Other way around, it’s a pseudonym created to work on the project, IIRC Lina got involved early-mid 2022 while Hector Martin started the project in late 2020 / early 2021.
They "just joined the project" and reverse engineered and developed the kernel drivers from relatively little prior work, in just a few months.
From afar they also seem to have a pretty ebullient and infectious personality and to stream a lot (we're talking 10h streams every few days: https://www.youtube.com/@AsahiLina/streams) which is impressive work and dedication.
What do you mean by "so much clout"..? She's one of the people who decided to work on the Asahi Linux project, and she's writing and streaming about the work she's doing and participating in mailing lists and IRC communities. Is that an unexpected amount of "clout"?
How am I "trolling"? I'm asking honest questions because I want to know. What happens more often is that every Asahi Linux post gets dragged off topic praising the person's greatness in a cult like manner. That's why I mentioned "fanboyism" in some other post.
I'd prefer if we kept to technical discussion in these posts personally.
> I'd prefer if we kept to technical discussion in these posts personally.
That's really not what your posting history indicates. And bringing up the subject of "fanboyism" is pretty much the exact opposite; that's never going to take the conversation in a productive direction.
Going after "posting history" is already taking things in a non-productive direction. If I disliked vtubers I wouldn't watch them so much. I have a bigger issue with pitch changers as they make listening difficult. Most vtubers don't actually use them.
It’s like a full time filter where the only online identity someone presents is as an anime girl. I think it was popularized by a company called HoloLive that has Japanese girls do this as a full time job? It’s a VERY niche culture thing.
Check out the Asahi Lina channel to see what I mean.
Do all subscribers to team subreddits follow the main sub as well though? Because I assume fans follow "their" teams' subreddit, and there's several in the 100k range.
Yes, but none of them are using voice changing that makes it hard to listen to. They all speak in their natural voices, or with a natural, manually produced affect that they prefer to put on, something that computers don't seem to be able to replicate yet.
I watch Japanese vtubers quite a lot and enjoy doing so, but I couldn't personally listen to Lina as the voice sounds grating on the ears.
I feel like I am going to regret this post, but anyway...
I am a woman in tech, and the whole thing feels slimy. I found it really creepy and couldn't watch more than a couple of minutes- I couldn't tolerate the voice changer for longer than that. But the whole concept is kinda gross because (as someone else pointed out below) the streamer is most likely an adult man, not a cute asian girl. And the persona is being performed for the benefit of other adult men.
>the streamer is most likely an adult man, not a cute asian girl
I can't see why that makes it "kinda gross"; in fact, playing with identity and having the courage to do so in a whimsical way seems awesome. Having the security in oneself to put on a persona so different from the one we present in meatspace is one of the best inventions of the 'net, in my opinion. The possibilities are freeing.
On the Internet, nobody knows you're a dog.
>And the persona is being performed for the benefit of other adult men.
What possible 'benefit' do they get out of it? Do we know if these "other adult men" leer at the persona? And if so, so what? So long as the person behind the persona isn't being harassed. It's strange that this sentence implicitly states that something done for the benefit of adult men is wrong in itself.
> From what I have gleaned, vTuber is a virtual YouTuber. The streams use an anime model and behaviour.
Yes, basically instead of streaming as themselves vTubers stream as a persona, usually with an anime-style virtual avatar.
> And for some reason I feel creeped out now. There is something perturbing about the saccharin persona coupled with the pseudonym.
I've not found a vTuber I enjoyed (but the main vTuber community is japanese and japanese TV / entertainment is... a lot, and not my cup of tea).
However how is "Asahi Lina" and an animated cartoon avatar creepier than psychphysic and a random picture or manga snip on a forum 15 or 20 years ago? As far as I'm concerned it's the same principle moved from the media of text and images to that of audio and video.
> However how is "Asahi Lina" and an animated cartoon avatar creepier than psychphysic and a random picture or manga snip on a forum 15 or 20 years ago?
That's a really good question and I only have a cop-out answer.
I'm not the type to get the creeps.
But the immediate thought I had, and commented elsewhere, was of Pennywise.
Which is really quite alarming!
I think the core was that the Asahi Lina persona in my mind is quite infantilised, and the realisation that the person behind the persona is quite likely to be an adult man.
Nothing particularly predatory here, but there is still a feeling of a false sense of security.
I think any first reaction is always an entirely fair first reaction. Biases exist. Snap judgements exist. No point in pretending otherwise.
I think, like with anything "new", life is always about disconfirming our biases. As a result, now that you're aware of this bias, have the additional time, and can access System 2:
Going forward, you can find a way to have a less aggressive reaction to learning about the next VTuber.
Unless this stuff goes entirely mainstream I will just avoid it.
This is the first time I've watched this kind of livestream, and as well as making my teeth itch with creepiness, it's not an efficient means of conveying the topic, and it is boring.
I guess I'm older than the average HN user but I prefer reading to watching a video for education.
I understood deadnaming to generally pertain to gender identity. I don't think it's far-fetched to initially consider Asahi Lina's name as a Pseudonym/Alias/Pen-Name, as many creatives (authors, artists, musicians) have been doing for hundreds of years.
If it is a gender identity decision, I still don't view it as malicious for the OP to ask. The context just isn't there in the blog post to make that clear.
Let's not head down this direction of madness please.
I've followed Japanese vtubers for some time and that is CERTAINLY not the case. Vtubers are just aliases for the real person. And each person picks and chooses how much they blend their real lives into that alias.
There are even some vtubers that will have a camera pointed at themselves while they stream as a vtuber (for example, streaming their body but not their face), or will alternate streams between a vtuber persona and a real live camera, or whose real identity is an open secret (i.e. artists who engage in vtubing but sell artwork at comic conventions attending as a real person). There's a huge range and spectrum of ways people choose to do vtubing.
(Note: A lot of the latter cases are more possible in Japan because of the general social/legal concept there that taking pictures of people without their permission is at least extremely rude and sometimes also illegal if you don't blur their face when publishing it. This is helped by the fact that it's a legal requirement that all devices capable of taking photographs must make a photographing noise when doing so. For example on iPhone in Japan it is impossible to silence the shutter sound effect without modifying the device hardware.)
> This is helped by the fact that it's a legal requirement that all devices capable of taking photographs must make a photographing noise when doing so.
That's not a legal requirement, it's a carrier request.
> For example on iPhone in Japan it is impossible to silence the shutter sound effect without modifying the device hardware.
That now turns off once you take them out of the country though.
Things must have changed in recent years, as it must have absolutely started out as a mandatory legal requirement. From my understanding it dates to some of the earliest digital cameras that didn't have shutter noises built-in and has nothing to do with carriers.
That goes against everything I've ever heard on the subject and I don't recognize this site. Also if it was a carrier reason, wouldn't the sound appear if you were roaming in Japan? It doesn't.
You haven't heard of Buzzfeed Japan? You're welcome to try to find the law though.
> Also if it was a carrier reason, wouldn't the sound appear if you were roaming in Japan? It doesn't.
If it was geofenced you could disable it by putting it in airplane mode, so it's by purchase country instead. (On recent phones it does turn off if you leave Japan though.)
Assuming it is the case I don't think it's polite to share this information. I don't know their motivation for creating a separate public image, but I think we should respect their decision to do so by not connecting them.
> It is imperative that all code contributed to the kernel be legitimately free software.
> For that reason, code from anonymous (or pseudonymous) contributors will not be accepted.
> All contributors are required to “sign off” on their code, stating that the code can be distributed with the kernel under the GPL.
> Code which has not been licensed as free software by its owner, or which risks creating copyright-related problems for the kernel (such as code which derives from reverse-engineering efforts lacking proper safeguards) cannot be contributed.
I'm not sure Apple would want users to run anything but Apple operating systems on their hardware, or the other way around (they fought against hackintoshes back in the day).
I hope to be proven wrong though, as their hardware is really interesting.
They've never taken issue with other OSes running on their hardware. They made Boot Camp for Intel Mac users to run Windows and wrote Windows drivers to use Apple-specific hardware like the iSight cameras. They even showcased this feature in the TV ads. Linux also worked but wasn't explicitly supported.
> They've never taken issue with other OSes running on their hardware.
What on earth are you talking about? iPads, iPhones and all other new devices from Apple outright lock out any kind of 3rd party OS from booting on them.
MacBooks are the only ones grandfathered into the old behaviour of actually allowing 3rd party OSes (although they've also lost the ability to boot Windows in latest generation as well).
It's clear the context of the whole thread is Apple's OS X/macOS based lineage of machines. Also, they haven't lost the ability to boot Windows; seemingly due to licensing issues with Qualcomm, Microsoft hasn't offered a Windows port yet. Of course, one couldn't boot directly into Windows on the older PowerPC-based Macs either...
"They've never taken issue with other OSes running on their hardware." is a false statement even within context of this thread. They've thanken issue with other OSes on their hardware.
Is it any different to the false statement that is "(although they've also lost the ability to boot Windows in latest generation as well)"? The last line specifically refers to macOS. No need to be deliberately obtuse.
Apple doesn't really care about individual hackintoshers. Some of their devs have griped about the uselessness of stack traces and logs coming in from hackintoshed machines (bogus errors produced by slight hardware mismatches, drivers developed by amateur community members filling logs with garbage, etc.), but they've never gone after anybody who was hackintoshing for personal use. There have even been fairly big YouTubers who've done it without issue, and back when Macs were Intel-only and had abysmal thermals/performance, a surprising number of Mac/iOS devs were using hackintoshes as their primary dev machines and submitting to the App Store from them.
Where they draw the line is selling hackintoshed machines or any of the tools to facilitate the process. Eliminate financial gain from the equation and you'll probably be fine.
Hector Martin claims so, which (frankly) suggests he has some insiders.
However, just because Apple "runs linux" doesn't mean they "run linux" the way you are thinking. It's very easy for corporations to write slapdash, horrific, unmaintainable kernel forks that run on a specific piece of hardware. That's just fine when you are testing hardware before handing it to your OS team, but absolutely unacceptable for upstreaming.
For examples of this, take a look at old Android devices (and their ancient kernels), or the original Corellium port of Linux to Apple Silicon (which happened almost half a year before the Asahi Linux beta - but the code was sheer unmaintainable crap). Upstream it? Heck no - it would be rejected entirely and need almost a total rewrite from scratch. Just because you can write a functional driver doesn't mean it is anywhere close to a good, maintainable driver.
So, in a nutshell... yes, Apple does use Linux for early manufacturing tests. But it would almost certainly not be in a state where we could benefit much from it, and certain features would likely not be implemented. It's not anywhere near as simple as "Apple has done the work already - just upstream it please!"
> Hector Martin claims so, which (frankly) suggests he has some insiders.
Eh, I've heard the same thing from people I trust, and I'm not the Apple news version of Deep Throat or something. It's simply not a very well-kept secret.
Although part of me wonders if the code will flow the other way. Now that marcan has put in the elbow grease to upstream things like 16KB pages, the non-standard ordering for regular MMIO and PCI on Apple Silicon, etc., will Apple embrace those in their custom distro to avoid having so much un-upstreamed code? We'll probably never find out, but it's fun to think about.
> The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?