The Next Generation in Graphics, Part 1: Three Dimensions in Software (filfre.net)
134 points by myth_drannon on April 27, 2023 | hide | past | favorite | 71 comments


Historic "next generation", not the future prediction one might expect from the title. It details the evolution of 3D graphics from 1984 to 1996, from Elite up to Quake.


An incredible 12 years of progress. Looking back 12 years at 3D graphics from 2011, and… they’re kind of the same as we have now.


I think there does in fact exist a recent major breakthrough in real time rendering: Virtualized geometry.

https://advances.realtimerendering.com/s2021/Karis_Nanite_SI...

https://youtube.com/watch?v=eviSykqSUUw

It was invented by Brian Karis at Epic. Virtualized geometry allows for unbounded geometric detail while retaining the conventional rasterization approach with polygons and textures. It works like a dynamic LOD, where the game engine dynamically adapts the rendered geometric detail based on visibility and screen resolution. Essentially the maximum rendering cost only depends on resolution and frame rate, not on the detail of the underlying meshes.

However, the solution by Epic, dubbed Nanite, does not yet work for animated meshes. So virtualized geometry is not possible e.g. for game characters. Brian Karis says this is a solvable problem though.
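The core idea behind this kind of dynamic LOD can be sketched in a few lines. This is my own illustration of the general screen-space-error technique, not Epic's actual algorithm, and every name and number in it is invented: pick the coarsest level of detail whose world-space geometric error, projected to the screen, stays under about a pixel.

```python
import math

def projected_error_px(geometric_error, distance, fov_y_rad, screen_height_px):
    """Project a mesh's world-space geometric error to screen pixels
    under a standard perspective projection."""
    # Pixels per world unit at this distance, for a given vertical FOV.
    pixels_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_y_rad / 2.0))
    return geometric_error * pixels_per_unit

def pick_lod(lod_errors, distance, fov_y_rad, screen_height_px, max_error_px=1.0):
    """lod_errors: world-space error per LOD, coarsest first.
    Return the coarsest LOD whose projected error is under max_error_px."""
    for lod, err in enumerate(lod_errors):
        if projected_error_px(err, distance, fov_y_rad, screen_height_px) <= max_error_px:
            return lod
    return len(lod_errors) - 1  # fall back to the finest LOD available

# Far away a coarse LOD suffices; up close we need the finest one.
errors = [8.0, 2.0, 0.5, 0.125]  # world-space error, coarsest -> finest
far_lod = pick_lod(errors, distance=2000.0, fov_y_rad=math.radians(60), screen_height_px=1080)
near_lod = pick_lod(errors, distance=5.0, fov_y_rad=math.radians(60), screen_height_px=1080)
```

As I understand it, Nanite applies this kind of test per small cluster of triangles rather than per whole mesh, which is why the cost tracks resolution rather than source-mesh detail.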


Are you referring to progress in rendering techniques or in the final visual result? At least in terms of the latter, games from 2011 look much cruder to me compared to games released today or even a few years ago.


Crysis with 2011 era mods looks about as good as a large chunk of modern AAA games, and even better in some cases such as compared to FF Strangers of Paradise.


Real-time ray tracing, introduced in 2018, and real-time denoising with super-sampling / up-scaling / frame-interpolation have both made meaningful visual quality improvements in the last 12 years. If you do a Google image search for “best games with ray tracing” and compare it to “best looking games from 2011”, there is an obvious and sizeable gap in quality.


It’s still nothing like what we saw from, say, 2000-2010. I’m not sure exactly where the diminishing returns set in given how much more powerful hardware is relative to a decade ago, but graphics development really has been disappointing. Maybe the explanation has a lot to do with programmers being trained on platforms that become outdated, or games with long development cycles being unable to take advantage of the tech that actually exists when they release.


I think games and graphics have some pretty inherent n^2 scaling which can make progress appear to have slowed down, despite large increases in computational performance.

Say your textures are 100x100px for example. If you double the size of your texture to 200x200px suddenly you have 4x as many pixels to deal with.
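The arithmetic of that example, written out (trivial, but it makes the quadratic scaling explicit):

```python
def texel_count(side_px):
    """A square texture's storage and bandwidth cost grows with its pixel count."""
    return side_px * side_px

base = texel_count(100)     # 10,000 texels
doubled = texel_count(200)  # 40,000 texels
ratio = doubled / base      # doubling each side quadruples the cost
```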


Take a look at Unrecord for the difference from 2010; photorealism has come a long way.

https://www.polygon.com/23691482/unrecord-bodycam-fps-steam-...


This is mostly due to the last console generation getting dragged out for so long. The PS4 came out in 2013, but thanks to a small intermediate upgrade and extended supply chain issues for the follow-up generation, by late 2022 you still had absolute high end games being made with decade old hardware in mind.


As with most things, in the beginning a lot of low-hanging fruit allows for rapid innovation and progress. As things become more developed, progress tends to slow. In that early era both hardware and software were making rapid progress.

I think the big change currently happening is path tracing. In the next couple of years a lot of mainstream hardware should be able to do it, and it makes a big impact on how games look.

Apart from that it's just more triangles, I guess. We are approaching photorealistic looks, and further development will try to run what huge desktop graphics cards can do on smartphone GPUs. There we have actually seen significant developments in the past 10 years.


Yes, it turns out the things that greatly improve graphics rendering within non-exponential algorithms were discovered quite fast. Either someone comes up with an unconventional breakthrough, or we're stuck with heuristic optimizations and FSR/DLSS.


But what would you think will be the next big step after some of the current hyper-realistic visuals made possible by modern game engines?


The recent Unrecord demo [0] looks amazingly realistic in parts, but it shows what I think we're still lacking: realistic, non-mo-capped human visuals. To my eyes anyway, there's still something very fake about the way characters move their bodies, their eye saccades, their mouths, the way they breathe and tumble, at least in terms of what can be achieved by realtime game physics. Certainly, given enough render time, studios can pretty convincingly fool the eye even now.

[0] https://www.youtube.com/watch?v=5qvVNzsJyB0


AI procedurally generated content. AI NPCs. And even AI-supported render engines. Maybe at some point complex shader logic can instead be generated by some AI.


Anything related to Collision Detection and physics at large. That part of graphics is still at very early stages in realtime context.


One reason is that up to Quake almost every game had its own engine, hard-coded by experts in pure graphics programming.

After Half-Life and Unreal, game engines became standardised (later of course came Unity etc.) and the focus of selling games turned to graphical style and storyline (and often multiplayer gameplay) over the pure mechanics.


Thank you. I showed up ready to argue that graphics plateaued some time ago, and that unless you intend to use ML to make it look more real, it’s only going to look more detailed.


Nah, it's lighting, because that's the most important thing. When the hardware is ready (at least 10 years out).

Games have been adding more detailed textures and more geometry because that's all they could do. Textures just require RAM.


Lighting and shaders.


> and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal.

He did it in the '90s. Now we have Javascript frameworks, SOLID principles, clean coding, uncle Bob and AGILE methodology.


Wow, easy now. You're throwing a bunch of stuff together and acting like it's all the same evil.

Trust me, the way they wrote games back in the day was the most AGILE methodology you'll ever see.

Clean coding is such a loaded term, too. It can mean different things to different people, but I presume you're throwing shade at the book "Clean Code".

Finally, Javascript frameworks... well, there is no redeeming those, and to be frank, it started with Javascript.


Anyone remember the SNES port of Doom? By all accounts, it was a piece of sh!t compared to the PC version, but talk about wringing what you could out of the limited SNES hardware.


SNES Doom was a total labor of love by Randy Linden; the story behind it is very interesting: https://www.shacknews.com/article/117004/super-doom-how-id-s... He created it without even getting the rights to do so first, and id was so impressed they had to publish it. It's arguably an even bigger programming achievement than regular Doom, as the SNES has no business doing anything remotely close to Doom. A lot of the work to make it run on the SNES (simplifying some levels and such) went on to almost all the other console ports too. Yeah, it wasn't the best Doom experience, but it was still pretty amazing for what was basically a souped-up 6502 processor in the SNES.


SNES DOOM does have the Super FX chip, which draws textured polygons to an on-cart framebuffer for DMA transfer to main video memory. It runs at 21 MHz, much faster than the SNES console itself and actually not all that far off from the Playstation or a 486 PC.


That's great, but Doom doesn't use polygons; it's a texture-mapped engine built on BSP traversal and column drawing rather than polygon rendering. Super FX does help speed up some vector math operations, but it's still quite limited.

If you played SNES Doom you would know it's really pushing the system to its absolute limits. The frame rate is pretty low and the cropped play area is small. They had to simplify a bit of level geometry to get the BSPs and other level-traversal logic for stuff like enemy AI to fit in memory (which was 128 kilobytes!). They didn't even have enough CPU power to texture the floor and ceiling.


The SNES port was my first introduction to Doom. What a blast from the past.


Why do you censor yourself?


> Now we have Javascript frameworks, SOLID principles, clean coding, uncle Bob and AGILE methodology.

I see this sentiment here often, but I still fail to see what is inherently bad about any of these.


Probably not bad, probably less fun though.


And we're all the worse for it.


> Now we have Javascript frameworks, SOLID principles, clean coding, uncle Bob and AGILE methodology.

Oh no, you’ve uncovered his secret plan!

https://www.youtube.com/watch?v=q_MkLfD0Ly4


> In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling.”

It sounds like Carmack really took the dispute personally, and was making some unwise choices.


I remember reading about Romero playing multiplayer Doom all day instead of working on Quake. Carmack could have been tired of his shit. With no walls there's no way to hide; everyone can see everyone else working or slacking off.


I was 13 when I downloaded QTest, the id Software tech demo, from a local BBS. My brother and I shared the computer, a 486 DX2-50, in our 'bedroom', an unfinished basement. The antique desk was our only furniture, a bare lightbulb hanging from the beams. Our mattresses sat on bare concrete, surrounded by cardboard boxes. My digital metabolism and BBS habit were pretty amped up by that point, but it's so hard to describe the feeling of running QTest. Doom was exhilarating... Quake was profound. Suddenly decades of future progress had crystallized.

Bless Carmack for including features such as scaling render output independently from screen resolution (+/- buttons by default?). It allowed us to share this vision of the future with our not-quite adequate DX2-50.


I was about 15 years old when Quake came out.

I had read an article in a magazine which said it was the first truly 3D game, which I (naively) disregarded as hype, as I had played loads of Duke Nukem 3D, which seemed to me to also meet the definition of 3D.

But I still vividly remember playing it for the first time and using mouselook to actually look around, only then did I actually understand it was mind blowingly more 3D than Duke or DOOM, especially when I saw a monster for the first time.

Another interesting fact which young gamers may not know: mouselook wasn’t an option you could turn on or off in the game settings. At most you could bind it to a key, and when you let go of the key, mouselook stopped.

It seemed to me it was conceived as an opt-in, keybindable option merely to “show off” the true 3D nature of the game. A lot of people just played the game with keyboard only, and there was also joystick support.

However, it soon became common knowledge that you could put “mouselook+” in “autoexec.cfg” (if I recall correctly) and it would keep mouselook enabled permanently. Once you became practiced enough at playing with mouse and keyboard in combination, you could run rings around slow keyboard-turners in multiplayer, who didn’t stand a chance.
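For anyone wanting to try it today: as best I can tell, the actual console command is `+mlook`, so a minimal `autoexec.cfg` in the `id1` directory would read:

```
// autoexec.cfg -- Quake executes this at startup
+mlook    // keep mouselook enabled permanently
```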


It's interesting that while 3D in crude forms captured gamers' imaginations early on, it never really caught on in non-game contexts except where unavoidable (CAD etc.).

We like to say that games are a metaphor for life, but that might not be quite true. It seems that for most uses, the accuracy of 2D trumps the realism illusion of 3D.


The Mac used to have a cube-turn animation for switching between screen spaces. It was alarming, like your computer was breaking down in spectacular Hollywood fashion.


Aren't you talking about AR/VR when you say 3D? If so, the main hurdle until recently was hardware constraints rather than the accuracy or realism stuff.


It’s in the comments on the original post, but Battlezone, Star Wars and other arcade games had 3D vector graphics well before Elite. There’s also stuff like Zaxxon, Q*bert or Marble Madness which simulated 3D with isometric 2D.


I don't think the article claimed Elite was first. Funny enough, I just spent the last three days implementing a remake of Battlezone on an RP2040 with just fixed-point math. That was a lot of fun. Still a few tweaks left to do, but it's mostly done. Edit: Battlezone's 3D is very limited; the camera and object rotations are limited to rotations about the y-axis (vertical axis), i.e. only yaw, no pitch, no roll. Given the hardware involved, this is understandable.
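For flavor, yaw-only rotation in fixed point looks roughly like this (my own sketch, not the poster's code; a 16.16 format with a 256-entry sine table is one common choice on chips without an FPU):

```python
# 16.16 fixed-point yaw rotation about the vertical (y) axis:
#   x' =  x*cos(a) + z*sin(a)
#   z' = -x*sin(a) + z*cos(a)
import math

FP_SHIFT = 16
FP_ONE = 1 << FP_SHIFT

# 256 angle steps per revolution; sine values pre-scaled to 16.16.
SIN_TABLE = [int(round(math.sin(2 * math.pi * i / 256) * FP_ONE)) for i in range(256)]

def fp_sin(angle):
    return SIN_TABLE[angle & 255]

def fp_cos(angle):
    # cos(a) = sin(a + 90 degrees) = sin(a + 64 steps)
    return SIN_TABLE[(angle + 64) & 255]

def fp_mul(a, b):
    # Multiply two 16.16 values, shifting the product back into range.
    return (a * b) >> FP_SHIFT

def rotate_yaw(x, z, angle):
    """Rotate a fixed-point (x, z) pair by an 8-bit angle; y is untouched."""
    s, c = fp_sin(angle), fp_cos(angle)
    return fp_mul(x, c) + fp_mul(z, s), fp_mul(z, c) - fp_mul(x, s)

# Rotating (1.0, 0.0) by a quarter turn (64 of 256 steps).
x, z = rotate_yaw(FP_ONE, 0, 64)
```

Because the camera only ever yaws, one table lookup pair per frame covers every object, which is a big part of why this fits on such small hardware.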


Everything prior to VR was merely a simulation of 3D. Isometric projection is no worse or better than one-point perspective. Sure, some of it was hard-coded and didn't generalize to a full environment and physics model, but so what if they didn't build more than the set pieces that would be on screen?


If you want to get pedantic then VR only simulates 3D as well since it is just two 2D images used to give the illusion of depth via stereoscopy (https://en.m.wikipedia.org/wiki/Stereoscopy).

However, in terms of gaming, 3D has a different meaning from the physical world. And in terms of gaming, the vector games mentioned before are classed as 3D.


You're kind of making my point for me. A video game is by definition a simulation. If we start gatekeeping what counts as "true 3D", the definition you have in mind will always be debatable. Personally I put the line at it being possible for things to be in front of or behind other things (rendering appropriately, and relevant to the game mechanics). So this excludes games that were 2D gameplay with hard-coded 3D assets, but does include "2.5D" games.

Basically I'm putting the line at actually needing to keep track of the Z coordinate for the game to work correctly. Isometric should count (provided one object sometimes obscures another otherwise in view). I don't know what else 3d could mean.


> You're kind of making my point for me.

No I’m not. Your point was that 3D games weren’t 3D. I was saying your argument isn’t true to how the term “3D” is intended in the gaming industry, and I cited other examples of how you’ve misunderstood that term.

> Personally I put the line at it being possible for things to be in front or behind other things (rendering appropriately, and relevant to the game mechanics). So this excludes games that were 2d game play with hard coded 3d assets, but does include "2.5d" games.

The point of 2.5D is that the game has 3D-like visuals but the gameplay actually only happens on two axes. It makes the distinction you’re trying to express, except it does so in a much more elegant way.

And 3D rendered assets in a 2D game is still 2D (eg Donkey Kong Country on the SNES)


>No I’m not. Your point was that 3D games weren’t 3D.

yes you are, and now you're insisting what I wrote is what you interpreted it to be. You really want to argue that you know what I said better than I do?

>The point of 2.5D is that the game has 3D-like visuals but the gameplay actually only happens in 2 axis.

I stated very clearly I'm only counting it if the third positional variable is actually required to implement the game. There's a very subtle distinction in whether this definition counts a given 2.5D game: if something is in mid air, is it possible to pass under it? Will a falling object land on something trying to pass under it (or will it screw up and clip through instead)? If the answer is no to either, then it's not 3D. It's just drawn like it's 3D but is still 2D (such as your DK example). If the answer is yes to both, then it's 3D.

There are games, not examples of first-person AAA modern 3D engines, often rendered in isometric or some other projection, which nonetheless pass this basic criterion of being 3D. If gameplay in which the third dimension is literally a relevant variable is still just a "simulation of 3D" and not "real 3D", then no other method of projecting a simulated 3D world onto a 2D screen should count either. That excludes pretty much everything prior to VR (and, as you pedantically argue, maybe also VR).


It's not just two 2D images, since there is a lens that emulates depth. On current headsets it's a static depth, but in the future it could be made dynamic by adjusting the optics on the fly based on the depth of what you are looking at. Outside of VR there are also light field displays, which take more than just two images and output light such that multiple observers can see the same object, but from different angles.


> It's not just two 2D images since there is a lens that emulates depth.

It doesn't emulate depth. The stereoscopic effect does that. It helps with focus (but this is a moot point, because even if it did emulate depth, it's still an emulation, so my point stands).

> Outside of VR there are also light field displays which take more that just 2 images and output light such that multiple observers can see the name object, but from different angles.

Sure, but they've been talking about this for literal decades and there aren't any consumer applications available yet. I think we're still decades off having that.

My point wasn't that real 3D isn't possible though, clearly it is. It's just that the GP's point about 3D games not being "3D" is disingenuous to how "3D" is usually termed in video games.


Prove you're not playing a very immersive VR simulation right now.


I'm not going to engage in idle speculation, because you’re arguing a very different point from your original “everything before VR isn’t 3D” comment.


Tune in tomorrow for our next exciting episode of PC-platform video gaming history.

The useful lesson there is that responsiveness will make up for many visual problems.

Must-get-framerate-up.


>“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.”

I have the opinion that Mathematics is a set of axioms, theorems based on those axioms and conjectures.

Yes, you follow some algorithms to prove something or to do some calculations, but those algorithms, no matter how interesting, are less important than the results.

To me, algorithms belong more to applied mathematics than to the pure science that math is.


Am I the only one here who gave up reading this? There's something up with the writing that I can't put my finger on.


This has been a problem people have with every filfre post lately.

The Digital Antiquarian is a blog but realistically it's actually a very long book. The first entry was published in 2011. It treats the entire history of interactive gaming.

Dropping in on chapter 217, or whatever this is, is a little confusing. Maher has spent a decade establishing a detailed context for discussing the nature of interactive experiences.

Part of the story is about the sad years from about 1995-2002 when interactivity was decreasing even as rendering technology improved. Maher is in the middle of that sad story right now and people are put off by the matter-of-fact way he discusses the flaws of these iconic games and by the pace of his narrative.

It is such a sad story that people think Maher "dislikes" or "mocks" these games. He does not. It was a real nadir for the industry.

Interactivity picked up again sometime around the release of Morrowind.

I love the blog. I would strongly recommend starting at the beginning and seeing if you don't love it a few entries in.


It's not that; it's that the prose is somehow awkward. Well, that's the best way I can describe it. I'll try it again. Interesting topic.


I think I dislike the assumption that more-real game equals better game. But I am not sure that is coming from the author or from the portrait he paints of an industry that seems almost wholly obsessed with this one end-goal.


Nope. To me it felt like a mess of the most overrated bullshit, which mattered only to people who were on the outside of this movement.


The player movement in Quake (and Quake-derived engines like Source) still feels like the "right one" to me.

Not sure if I can put it into words, but to this day, moving around in an FPS based on any other engine just "doesn't feel right".

Or maybe it's just nostalgia...


Let me make the case that it's definitely not just nostalgia - or, at least, that Quake style movement didn't fade away because of any objective, rational process.

One of the exasperating consequences of the rise of game engines has been that you have games shipping that have more and more of their game design inherited from their game engines.

On some level this makes sense - games are just massively complicated, and so if you already have working, tested code, it's often quite tempting to just go with what already works (and not spend time really internalizing how working things work) and then focus on figuring out the particular things you are adding from that baseline.

As just such an example, when I was working on Activision's Soldier of Fortune, we largely inherited Quake 2's movement code, and most of it was left untouched by the time we shipped. At some point, midway through development, inspired by Thief, I stayed late one night with a co-worker, and I added in leaning around corners to the player controls. I don't remember the particulars of that process, but (obviously) I had to make tons of aesthetic choices while doing that, because I was writing it from scratch. But the base movement we could inherit.

If you go back and look at first-person games from the late 90's, their aesthetic choices about basic player movement are all different in subtle ways. That makes sense, because most studios were writing their own bespoke engines at the time, and there was vastly less code sharing. So people were writing code because there was no particular alternative, and they were making tons and tons of aesthetic choices whether they wanted to or not. Lots of those choices weren't always great, but they were often particular.

It's clear at this point that lots and lots of FPS games just inherit Unreal Engine's movement. Not because it's great, but because it comes with Unreal and it's a default. To me, there's something very specific about the way friction works with player movement in Unreal that feels very ... sticky? ... compared to Quake engine games. Players come to a stop when input is released in a way that feels like being in glue; again, at least to me. There's more subtle gliding around in Quake. As far as I can tell, the difference rarely affects gameplay in most games. But it does change how it feels aesthetically to play, moment by moment.
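That glide can be caricatured in a few lines (a toy model, not either engine's actual code; the constants are invented): Quake-style ground friction removes speed in proportion to current speed each tick, so releasing input produces a geometric decay rather than an abrupt stop.

```python
def quake_style_friction(speed, friction=4.0, stop_speed=100.0, dt=1.0 / 60.0):
    """Each tick, remove a slice of speed proportional to the current speed,
    clamped below by stop_speed so the player eventually comes to rest."""
    control = max(speed, stop_speed)
    new_speed = speed - control * friction * dt
    return max(new_speed, 0.0)

# Releasing input at 320 units/s: speed decays over several frames -- a short glide.
speed, ticks = 320.0, 0
while speed > 1.0:
    speed = quake_style_friction(speed)
    ticks += 1
```

The `stop_speed` clamp is the detail that finally brings the player to rest; without it, the purely proportional decay would only asymptote toward zero and you'd slide forever.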

Anyway, this topic feels intensely path dependent to me. Unreal's movement is the default because Unreal is the default engine (in a lot of contexts), but it didn't become the default because of anything specific about its player movement code. Those aesthetic decisions were just along for the ride, so to speak. Or at least, that's my sense.

Interestingly, I found a github project a while ago that tried to reverse engineer Quake / Source's movement and put it into Unreal Engine 4. No idea how successful the project is, but I suspect it might be an interesting resource for seeing what's different between the two: https://github.com/ProjectBorealis/PBCharacterMovement


There are still people out there (such as myself), and a very tiny number of game studios, who still care about these small details and will often rewrite movement from scratch in their respective engines. To me, fluidity of player movement should be very high on the priority list of "features".

You are quite right, though, about devs just leaving UE movement in... which is a huge shame, because it's such an important detail in a game: how the game feels. I don't blame devs who don't do this, though, because it's incredibly complex to write your own, especially around what's already there (10x so if you're doing multiplayer). I've also come across whole studios while contracting who don't understand that having a game that looks good but feels bad can impact their sales and reviews.


> To me, there's something very specific about the way friction works with player movement in Unreal that feels very ... sticky? ... compared to Quake Engine games. Players come to a stop when input is released in a way that feels like being in glue - again, at least to me. There's more subtle gliding around in Quake.

UE movement is not physically correct. So it would make sense that it generates a nagging sense of uncanniness, especially when the rest of the game looks close to realistic.


> It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this.

What? Why wouldn't you let your kid play Duke Nukem 3D? It's an awesome game!


That depends so radically on the child.

What their personality/psychology is like and what they've been exposed to already.

I wouldn't want DN3D to be my son's first exposure to the idea of female strippers. (What should it be? Now that TV channels no longer exist and you just don't catch inappropriate imagery by accident... I don't know! Something that explores the idea that he's starting to enjoy naked ladies, which is good but under societal strictures; strippers are sometimes exploited, so it's ethically fraught; OTOH men can definitely be exploited through their sex drive, the many traps. Also, look, that's a fake boob. Not some guy with a gun, goddammit.)


> Now that TV channels no longer exists and you just don’t catch inappropriate imagery by accident...

What? Hehe, on the internet today kids are way way WAY more likely to see inappropriate imagery than when we had TV channels. My kids seem to have turned out okay, but we had multiple accidents where they search for something and got back stuff that was so much worse than what they asked for. They get a firehose of inappropriate content on TikTok and YouTube. I would kill to have the internet be as moderated and tame as TV was.


I grew up playing the God of War saga with my 3-year-old bro and other games like MK9 and so on. We just laugh about it now.

Good memories. We enjoyed both R and kid games.


It also has crude adolescent humour which humourless old men (like me) may deem inappropriate for children.


It's crude, surely, but it's mostly lost on the audience that's too young to understand it, and if they understand it, then they're obviously not too young?

Also, what's the problem with crude adolescent humor?


While shopping in a mall with my dad in 1997, I found Duke Nukem 3D in a bargain bin. I think it was something like $20, which was a steal back then; IIRC most games, especially console games, retailed for three times that.

Anyway, I was under 18, and I begged - but no dice, he saw the PEGI 18 symbol and wouldn't budge. Not one bit. Instead he saw Civ II in the same bin, and thought that would be a good game for me. And it was!


You definitely got the better deal :)


I appreciate the attempt of the author to record the history of 3D games (this is how I took it).

With that being said, from the title, I thought I was going to be reading a history of 3D graphics software. Instead it felt like I was reading a story about the 3d game industry, at least at the end... So, might want to think about that. It didn't feel like a cohesive whole, to me.

There also seems to be a lot of personal bias in this piece toward the commercialism of software.

> We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

I'm sorry, but in my experience QuickTime was a mainstream mess to be avoided like the plague. There were way better solutions out at the time that were not so well known.

> Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches.

> Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.

Misplaced? Please... I feel like this is really biased toward assuming that games, and the people who make them, need to be commercially viable to be any good; it's almost like this piece is coming from some PR guy. People reverse-engineered games because that was their contribution to the technological movement of the time; it was, and is, just as important as making software.

Also, I don't think it had anything to do with poisoned relationships. It's more a difference of perspective: there were people in the software industry only concerned with making money off of developing software, and there were people who were interested in changing the world from an idealistic perspective. Looking at where we are now, I'd say the poisoning is done by the people whose only interest in software development is making money.


>>this was a period when Apple’s QuickTime was regarded as a killer app

>Quick Time was a mainstream mess to be avoided it like the plague

There was NOTHING like QuickTime before QuickTime. It enabled Adobe Premiere in 1991, built by an ex-QuickTime engineer on the Mac platform, and in 1991 Avid was ported from Apollo $workstations$ to the Mac thanks to QT. Microsoft and Intel got so scared of it that they paid off a third-party Apple contractor to steal code. Microsoft shipped the stolen code as part of a Video for Windows update. https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company https://www.theregister.com/1998/10/29/microsoft_paid_apple_...

"The [QuickTime] patent dispute was resolved with cross-licence and significant payment to Apple." The payment was $150 million.

"Intel gave this code to Microsoft as part of a joint development program called Display Control Interface."

"Canyon admitted that it had copied to Intel code developed for and assigned to Apple. In September 1994, Apple's software was distributed by Microsoft in its developer kits, and in Microsoft's Video for Windows version 1.1d."



