Hacker News
Bash on Ubuntu on Windows (microsoft.com)
420 points by aymenim on April 7, 2016 | 422 comments



I am a little scared of the distinction we are starting to make between "computers" and "developers' computers".

On most computers nowadays you cannot code (tablets and smartphones). Are computers doomed to be an expensive tool for a few "nerds"? What will be the impact on computer literacy?


It's not recent; this developer-user split has been growing for a few years now:

http://boingboing.net/2012/08/23/civilwar.html

http://boingboing.net/2012/01/10/lockdown.html

...and RMS predicted this almost 20 years ago:

http://www.gnu.org/philosophy/right-to-read.en.html

I think the rise of P2P, file sharing, and the openness of the Internet in the last decade significantly narrowed the developer-user gap; since then it has been widening again, motivated by corporations' desire to maintain control over their users.


> motivated by corporations' desire to maintain control over their users.

I think that's only one factor, and not a majority one.

Most users don't want to have to deal with "how it works". They want a simple, easy to use tool that works reliably... And they want to call someone to "fix it" when it "breaks". That's how it works with plumbing, cars, landline phones, stereo components, televisions, and all the electronics they've ever used.

The exceptions are computers and some smartphones, which can present cryptic error messages, have weird things in their settings, and generally make a "dumb user" feel out of their element. Think about the confusion users feel when confronted with a funny noise in their car. "I'm not a mechanic, what does that noise mean?" is no different from "I'm not a computer person, what does that error mean?" What's more, the meaning of the question is not "what, mechanically/electrically, is at fault?" It is "how much time/money will it cost to get it fixed?"

It's not just a small preference, either - the height of luxury is "push button" services that "just work". Go to a high end hotel, and your room phone has just one button. Top end consumer products of all sorts strive to be an easy-to-use "appliance". A dumbed-down user interface without developer tools is user preference, status, customer comfort and pride, all tied into one.

So 99% of companies end up designing their interfaces like that hotel phone: http://salestores.com/stores/images/images_747/IPN330091.jpg

IMO the most impressive thing about OSX is how well it supports both audiences: it feels like a push-button, high luxury, comfortable, easy device to my mother. But under the hood there are great logs and a solid BSD-based operating system model. It comes prepackaged with a lot of developer tools, hidden in a place where I would look right away, but my mother would never notice.

Sure, some companies use software to limit and control their customers (cough cough Sony), usually with sharp legal/lobbyist teeth to enforce that control. But 99% of companies out there just want to make their users feel comfortable, high status, and competent to use their device.

While I agree with RMS that this split is inevitable, I don't believe it's about control. It's about two distinct market segments: auto enthusiasts who want control over the torque settings in their high end car, and people who just want a car that fucking works. Chefs who want sector-by-sector control over their oven's heating profile, and people who just want to be able to cook a fucking roast without burning it.


The problem with push-button service is that you need people at the backend to make the magic happen. Those people cost money. If you try to provide Apple-level simplicity at Google-level prices, you won't be able to afford those people. Google itself is a fine example: it works well, until it doesn't, and then you're stuck. Providing a power-user interface on a consumer-level product is IMHO a necessity if you leave problems up to the user to solve. At least it gives them a chance of getting unstuck.


And this is why people will pay Apple prices for Apple gear. If you provide a "power user interface" it's a sure sign you're trying to save $$$ by skimping on support and you don't care about the user. This goes even for professional tools; cf. Autodesk Inventor vs. AutoCAD; Visual Studio vs. editor and command line tools.


It's broader than that. The distinction lies in production versus consumption devices.

Tablets and phones are consumption. You can't do any serious work on them - development included.

This is why laptops and computers have stuck around in spite of the proliferation of cheap, tiny, elegant consumption devices.

So no, I don't think laptops and computers will go away for non-nerds, just for people who don't produce anything.


I would put Instagram or SnapChat firmly into the production column. While many do not like the "output" of this production, that does not change the fact.

And a lot of music creation apps exist for tablets/phones.

This production/consumption divide is too rigid.


I think we need a better set of words for this, but I'd still put Instagram and SnapChat on the consumption side - "producing" a photo and maybe applying a sticker and a filter isn't really production. Those features are designed to spruce up the photo that is fed directly into the consumption loop without much context or sense to it.

Now don't get me wrong - while I only use the two a little, I think they're fine. It's communication, an important part of human experience. But, at least in my mind, Instagram and Snapchat fall firmly into the same group as browsing Facebook or 9gag, as opposed to e.g. making a let's play video or a comic strip.


For now, it is true that most "professional" productivity apps are on laptops / desktops.

Yes, there are ways to take photos and create music on tablets and phones. You can do some basic editing on them, even. But the "professional" tools for photography and music, with all the bells and whistles you can think of, are still dominated by laptop / desktop computer programs. (The dominant programs being Photoshop for images, and DAWs like Logic, Ableton Live, and Pro Tools for music.)

The distinction between "production" and "consumption" devices is indeed kind of too rigid, in the sense that professionals will of course use the creative tools that come on tablets and phones, even if the desktop / laptop programs are the primary tool. Tablets can also shine as an extended interface for desktop programs. (E.g., Logic Pro and others have apps that turn an iPad into a remote controller for the main DAW, and programs like Astropad turn your iPad into a Wacom-like tablet for Photoshop, etc.)

The obstacle is interface. The fine-tuned control of a tablet or (especially) a phone is much poorer than using a mouse and keyboard with a large screen. Until that gets resolved, I doubt desktops / laptops will go anywhere.


You can create content on tablets, and some of it is excellent content.

Development isn't done on tablets because the input devices we have to make code are limited to a keyboard, and most people think text files are code, rather than a serialisation/deserialisation format for an AST.

You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.
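That "text files are a serialisation format for an AST" view is easy to demonstrate; a sketch using Python's stdlib `ast` module (`ast.unparse` needs Python 3.9+):

```python
import ast

# Source text is one serialisation of the program's structure...
source = "def add(a, b):\n    return a + b\n"

# ...which the parser deserialises into a tree of nodes...
tree = ast.parse(source)
print(type(tree.body[0]).__name__)  # FunctionDef

# ...and which can be serialised back into (normalised) text.
print(ast.unparse(tree))
```

In this view, the text editor is just one possible front-end for building that tree.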


> You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.

I doubt it. Perhaps we'll be making ASTs by writing (i.e. drawing symbols with styli or pens), but I don't think we'll be doing it via gestures and speech. There's a reason that we don't teach math via interpretive dance.


There's also a reason people in the same room don't communicate by tapping buttons with letters on them.


Like 'Zecc said, maybe they don't care about the third parties present. Or maybe they want everyone to be a part of the conversation, which is often fine. Or maybe they talk about emotional matters.

But text is a pretty fine form of communication and I find myself using it very often at work (and at home I often talk this way to people not in the same room, but in the same flat). It's fast, it's convenient, it's less disrupting, and the only reasons to avoid it are some silly preconceptions that digital communication is somehow "worse" than spoken words.

Also, did you never pass notes to your friends while in school? That's the pre-smartphone equivalent of IM.


We do that a lot; for technical stuff it is often many times more efficient than talking.

Edit: And easier to search, remember, and read again; plus the less nice variant of that: 'you never said that to me' 'I did:' copy/paste.


I can't say I've found any scenarios where talking to someone is many times less efficient than typing them a message. There are certainly times where it's helpful to supplement conversation with code, but that's a different story.

Regarding being easier to search and read it again, it seems like there are potential technical solutions to that problem, but I would agree that we're not there yet.


Obviously people occasionally do talk in person via pressing buttons. That doesn't mean it's generally preferred.

Recorded speech is also searchable so not sure that's relevant.


> Recorded speech is also searchable so not sure that's relevant.

It is relevant; recorded speech is not very searchable, especially if you are talking in a group at a conference where people can be from different countries with different dialects (which is the normal situation for our group talks). Also it is not convenient, and sometimes not possible, to record every (conference) meeting (too much noise etc). With text it's automatically recorded and perfectly searchable...

Also some of my colleagues are not good at English listening but are very good technically; if I type what I mean they understand while if I/we tell them, everything has to be translated and/or repeated many times.

I think the tech is not there yet to say it's not relevant.


Totally agreed the tech is not there yet. I just also believe that, at the current rate machine learning is going, the issues will soon be much less relevant.


I hope so, but that has been a long-standing promise. For me no amount of voice-to-text beats typing. I do not know why, but things like Dragon basically give me gibberish. And that is with natural language. With natural language, code, and math mixed, it is just vomit. I have no clue how they will solve that soon and then mix it with translation as well. I hope there is a way, though.


I do that all the time at work. Coworker (who's sitting next to me) has headphones on? Message him on HipChat.


They don't care about other people in the same room trying to concentrate on what they're doing?


Go ask teens with a smartphone at a restaurant but I've been doing the same myself sometimes. 1-1 or group chats.

A more specialized scenario: I was copy/pasting stuff to a colleague in the same room yesterday.


They don't?


Generally, no they don't.


Plain text files are an incredibly powerful way of "storing ASTs"; the advantages are far too numerous to list, the primary one being complete and total interoperability with all other tools that accept plain text files.
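For instance (a trivial sketch, with a made-up snippet): because source code is plain text, generic string operations work on it with no language-specific support, which is exactly what tools like grep, wc, and sed exploit:

```python
# Generic text operations applied to source code, no parser required.
source = "def add(a, b):\n    return a + b\n"

print(source.count("def"))             # search, like grep -c
print(len(source.splitlines()))        # measure, like wc -l
print(source.replace("add", "plus"))   # transform, like sed s/add/plus/
```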

I will bet you £100 that we won't be programming by speech and gestures in even 25 years time as the disadvantages are enormous.


Not saying we shouldn't store software as text. Just saying we don't need to make software with text.


Get a better editor - one that lets you operate on semantic units. And/or get a better programming language - one that lets you operate on code as AST.


I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.

One other advantage of directly manipulating AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. This same thing is possible with plain text code, but not quite as common.


> I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.

I think there are ports of paredit-like features to those languages in Emacs too, and all the other semantic features of Emacs itself work with those. As long as the language's major mode properly defines what is e.g. a function, a symbol, etc. you can use semantic navigation and editing.

> One other advantage of directly manipulating AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. This same thing is possible with plain text code, but not quite as common.

I don't think this is something that an AST gives you. The AST is just a more machine-friendly representation of what you typed in the source code. Portability between different platforms depends on what bytecode/machine code gets generated from that AST. And since the AST is generated from the source anyway as one of the first steps in compilation, getting it to emit the right set of platform-specific instructions means you can compile the original source there too.

And AST doesn't solve the problem of calling platform-specific functions and libraries anyway.
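CPython illustrates the point: its own pipeline is source -> AST -> bytecode, and the built-in `compile()` accepts an AST just as readily as source text (a sketch; nothing here is platform-specific):

```python
import ast

# Start compilation from the AST stage instead of from text;
# compile() emits the same bytecode either way.
tree = ast.parse("x = 2 + 3")
code = compile(tree, filename="<ast>", mode="exec")

namespace = {}
exec(code, namespace)
print(namespace["x"])  # 5
```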


Sure, there are many (excellent) AST-based editors. However, an AST editor that is based on a keyboard and requires you to learn to type at 160 WPM won't help most tablets be good code creation devices.

Data structures are shapes. A shape is better drawn than described in text.


My point is - there are AST-based editors and languages (e.g. Emacs with Paredit and Common Lisp) and you can see that even in that mode of "thinking" about code, you can't beat the speed, efficiency and flexibility of the keyboard.

> Data structures are shapes. A shape is better drawn than described in text.

Draw me a linked list. Tell me how much faster it is than typing:

   (list 1 2 (foobar) (make-hash-table) (list "a" "b" "c") 6)
Even on an on-screen keyboard on a tablet, it's faster to type than to draw data structures. A flat sheet of glass maybe gives us the ability to get the (x, y) coordinates of a touched point more easily and with more precision, but it sacrifices many other important aspects - like tactile feedback and the ability to feel shapes. With a physical keyboard, you're employing more of the capabilities your body and mind have, and that's why it's faster than a touchscreen.

Unless you can find a completely different way of designing UX, then a tablet won't be a suitable device for creation. None of the currently existing solutions come close to beating a physical keyboard and a mouse.


> Draw me a linked list

I don't normally use linked lists, but here's an array:

"list joe (subtle gesture) mary (subtle gesture) dave end"

If I wanted to delete dave from the list I could grab it and slide it away or say "list delete last".

> Tell me how much faster it is than typing

Everyone in the room I'm in now can talk at 200 words per minute and use their hands. Very few of them could type that fast.


> "list joe (subtle gesture) mary (subtle gesture) dave end

How will you go about drawing "joe" and "mary"? Is it faster than typing? Note that you can't always select stuff from dropdowns - you often have to create new symbols and values.

> Everyone in the room I'm in now can naturally talk at 200 words per minute.

How fast they can track back and correct a mistake made three words before? Or take the last sentence and make it a subnode of the one before that? Speech is not flexible enough for the task unless you go full AI and have software that understands what you mean.


>> You could easily build an AST with gestures and speech

> How will you go about drawing "joe" and "mary"?

I'll just say it, it's easier. As I said at the top of the thread, gestures and speech.

> How fast they can track back and correct a mistake made three words before?

I gave an example of opening an existing structure and modifying it in the comment you're replying to.

> Or take the last sentence and make it a subnode of the one before that?

Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.


> I gave an example of opening an existing structure and modifying it in the comment you're replying to.

Sorry, I misunderstood what you meant by "subtle gesture" there.

Anyway, in the original comment you said:

> Data structures are shapes. A shape is better drawn than described in text.

I'll grant you that speaking + gestures may not be a bad way of entering and manipulating small data structures and performing simple operations. But until we have technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and hang for half a second at random), physical keyboards will still be much faster and much less annoying.

But I still doubt you could extend that to more complex editing and navigating tasks. Take a brief look at the things you can do in Paredit:

http://pub.gajendra.net/src/paredit-refcard.pdf

Consider the last three or four subsections and ask yourself how to solve them with touch, gestures and speech. Are you going to drag some kind of symbolic representation of a "tree node" to move a bunch of elements into a sublevel? How about splitting a node into two at a particular point? Joining them together? Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for)? Searching in code for a particular substring?

Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction. There are stories on the Internet of blind programmers using Emacs who can achieve comparable speed to sighted ones. This usually involves using voice pitch and style as a modifier, and also using short sounds for more complex operations. Like "ugh" for "function" and "barph" for "public class", etc. So yeah, with enough trickery it can be done. But the question is - unless you can't use the screen and the keyboard, why do it?

> Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.

DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.


> But until we have a technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and hang up for half a second at random)

Totally agreed. Theoretically, you should just be able to gesture a list with your hands and say "joe mary dave" and the software knows from your tone that's three items and not one.

I don't know that much about Lisp and s-expressions, aside from the fact that it can edit its own AST. That's not a way of avoiding the question, it's just my own lack of experience.

> Are you going to drag some kind of symbolic representation of "tree node" to move a bunch elements into a sublevel?

Yes, I already think of a tree of blocks/scopes when editing code with a keyboard; visualising that seems reasonable.

> Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for).

Here's the kind of stuff I use an AST for: finding function declarations and making them function expressions. I imagine that would be (something to switch modes) "find function declarations and make them function expressions". Likewise "rename all instances of 'res' to 'result'" with either tone or placement to indicate the variable names. More complex operations on the doc would be very similar to complex operations in the doc.
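That "rename all instances of 'res' to 'result'" operation is exactly the kind of thing AST tooling already does mechanically; a sketch in Python (rather than the JavaScript case described), where `compute` is a made-up function that is never actually called:

```python
import ast

class RenameRes(ast.NodeTransformer):
    """Rename every variable 'res' to 'result' by walking the tree."""
    def visit_Name(self, node):
        if node.id == "res":
            node.id = "result"
        return node

source = "res = compute()\nprint(res)"
tree = RenameRes().visit(ast.parse(source))

# Only Name nodes change, so a string literal "res" would be untouched.
print(ast.unparse(tree))
```

Because the transformation operates on nodes rather than text, it can't accidentally rename a substring inside an unrelated identifier.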

> Searching in code for a particular substring?

Easy. Have a gesture or tone that makes 'search' a word for operating on the document, not in it.

> Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction.

Yep, I don't think it would still resemble normal speech and interaction either, the same way reading code aloud doesn't. It would however be easier to learn, removing the need to type efficiently as well as the (somewhat orthogonal) current unnecessary ability to create syntax errors.

> DevTools are a bad example for this task. Using keyboard is much faster and more convenient than mouse. C.f. Paredit.

Not sure if I'm reading you correctly here: typing DOM methods on a keyboard in DevTools is obviously slower than a single drag-and-drop operation. Using hands to do it directly would obviously be even faster than with the mouse.

Stepping back a little: I guess some people assume speech and gestures won't get significantly better, I assume they will.


That's great if you just want the strings joe and mary. What happens if you want a list of People?


Off the top of my head:

favouritePeople is Person list, name Joe age 32, Mary 23, Steve 64, end

Using tone to separate entries, but you could use a secondary gesture for that instead. Also some pattern matching.


> I will bet you £100 that we won't be programming by speech and gestures in even 25 years time as the disadvantages are enormous.

Unless AI advances considerably. For years I've imagined myself talking to the small specialized AI living in my computer, giving it instructions that it would translate to code...


Natural language is a terrible way to specify software.

Writing software is about telling a blazingly fast, literal, moron what to do. The ambiguity inherent in natural language is not a good way of telling such a thing what to do.


>> AI advances considerably

> blazingly fast, literal, moron

I think I have discovered the source of your disagreement.


I suddenly envision a "The Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power)-type scenario, where one programmer suddenly discovers that he or she can understand and create binary patterns without relying on the AI.


And if we _are_ Ima start buying Spotify ads that just shout out "semicolon exec open bracket mail space dash s space passwords space owned at gmail dot com space less than space slash etc slash passwd close bracket semicolon" at top volume.


Actually, AST editing with a touch interface has been explored by MS Research with TouchDevelop (https://www.touchdevelop.com/). In their editor you just insert/combine AST parts instead of typing them.


Came here to say this. It works surprisingly well on phones and tablets, primarily for making single file scripts.


It depends. If you spend some time writing in Lisp, you'll learn what it is like to write in an AST, including navigating and editing it as a tree and not as strings of characters. And you'll see that the keyboard is still the most convenient interface we have for that. Touch, gestures and speech lack both the speed and precision to be effective at this job.


You couldn't really code on an Atari 2600 or a Super Nintendo either, but all of us somehow turned out OK. I wouldn't sweat it.


> are computers doomed to be an expensive tool for a few "nerds"?

No. Because of the Glorious PC Master Race - mods, trainers, hacks, overlays etc - these all need dev and root access.

Btw, game modding, cracking, save game editing etc. are the best gateway drugs toward a full-blown IT career.


Word. I remember using a hex editor to alter a saved game. It would be decades before I learned exactly how hex works.
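The same trick, sketched in Python instead of a hex editor; the offset and the meaning of the byte are made up, since real save formats differ per game:

```python
# Patch one byte of a "save file" at a hypothetical offset, the way a
# hex editor would. A bytearray stands in for the real file contents.
save = bytearray(b"\x00" * 32)
GOLD_OFFSET = 0x10              # made-up location of the gold counter

old = save[GOLD_OFFSET]
save[GOLD_OFFSET] = 0xFF        # max out the counter
print(f"0x{old:02x} -> 0x{save[GOLD_OFFSET]:02x}")  # 0x00 -> 0xff
```

With a real file you'd read the bytes in, patch them, and write them back out.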


Well, since we are on an Ubuntu thread, I will link these just for fun ;)

http://www.ubuntu.com/tablet/developers

https://plus.google.com/u/0/105864202742705090915/posts/jNvZ...


Have you met an average user? The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.

Hell, even the lack of window management in iOS/Android is making the UX much easier to understand for the majority of users I know. My granddad, who was an excellent mechanical engineer, has been using computers for the last 20 years, and he still struggles with the click/double-click distinction.


> and he still struggles with click/double-click distinction.

Have you tried teaching him that? I highly doubt an old person, especially one with an engineering background, will have trouble understanding the distinction if someone bothers explaining it to them.

Or in general - it's surprising how much non-tech people can understand about technology if someone bothers to sit down with them and explain the concepts to them. Usually the reason they don't learn this stuff themselves is the typical human impulse of "if I haven't figured it out in 3 seconds flat, it's too difficult and I won't understand it".


So many times I've lost count.


> The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.

Only if you want to keep them illiterate, which companies are more than happy to do, since it means users can be more easily persuaded and kept as dependent consumers.


People have had 25 years (Wild guess) to become literate, and they haven't. What makes you think that's going to change?


General (human language) literacy took centuries.


It probably isn't, if the attitudes in the present IT world continue. But it doesn't have to be this way - about the only thing needed to fix this situation is to create an expectation that yes, you have to sit down and spend 5 minutes learning before you can use this stuff effectively.

Somehow nobody complains that cars or microwave ovens are too complicated. Everybody knows they have to learn how to use them - either through a training course or just by reading a manual.


Most people really do not care enough to learn past a 'just use it' level of detail.

Are my parents or family interested in password managers? Heck no... why should they be, when the browser will remember stuff for them?

Permissions? You have to be joking... they want to read their email or draw a picture.

Computers are there to make life easy - they're convenience tools (for the mass market). If people have to understand them beyond switching them on and pressing a few buttons, they've failed.

It's not the IT world... for years, we were outcast as "geeks" and "nerds" (those were insults in the past). It's that the average person doesn't want (or need) to know about this.

How many people service their own car?


> Most people really do not care enough to learn past a 'just use it' detail.

True, but there is still some learning to do. The only way you can reduce it (barring solving general AI and making a system that actually knows what you mean) is by reducing the things a device/piece of software can do. That's what the industry is doing - cutting out features, turning software into shiny toys. Because from the market perspective, it is enough that people sign up for / buy the product - it doesn't have to be actually useful.

That's why software for professionals looks complicated - because there the company actually has to make a useful tool. This state of things is sadly a big loss for humanity - if the only way to make stuff "sexy" is to make it barely useful, then the general population is in fact missing out on all the amazing things technology could allow.

(And the tech people are missing out too, because they're too small a niche. It's more profitable to target the masses instead. That's why all mobile devices are getting dumber.)

> It's not the IT world... for years, we were outcast as geeks and nerds (they were insults in the past). It's that the average person doesn't want (or need) to know about this.

Oh but it is the IT world. We've been invaded by the "normal people" and we've lost the battle. Most programmers employed nowadays are not much different from your average non-tech person, and have nowhere near the technical expertise you'd associate with the "geek and nerds" of the past.

> How many people service their own car ?

I'm not talking about servicing, but about driving. You have to spend 30+ hours in training to be allowed to drive on a public road. Nobody complains, because people understand that to use a car well, you have to learn how to do it.


> either through a training course or just by reading a manual.

If I had to read a manual to operate my microwave, toaster, coffee machine, sandwich maker, oven, games console, etc etc, I'd just get rid of them.


You were probably taught how to use most of those by your parents, either directly or by observing. I find it hard to believe that when you're dealing with a new class of appliance for the first time, you don't even peek at the manual or some tutorials.

I say class, because most toasters work the same, most microwaves work the same, most smartphones work the same, and most 3D modelling programs work the same too. But you have to get that first little bit of knowledge about a class of tools from somewhere, even if from your own experimentation. Humans aren't born knowing how to use technology.


> Only if you want to keep them illiterate

You sound like a guy who teaches his kid to swim by throwing him in the stormy sea.


> Have you met an average user?

I don't think anyone ever has.


What do you mean? I pick up my Android phone and I've got an app that gives me a Python shell, "Terminal IDE" which includes tons of CLI developer tools like a C compiler and various editors, and a full Debian install I use for more secure SSH (using real OpenSSH), development, and even some operations on various servers. There are even full-blown Java IDEs for Android that you can install.

So here's just a few ways you can code on Android:

QPython: https://play.google.com/store/apps/details?id=com.hipipal.qp...

AIDE (Java): https://play.google.com/store/apps/details?id=com.aide.ui&hl...

Terminal IDE: https://play.google.com/store/apps/details?id=com.spartacusr...

If all else fails, just deploy debian with Linux Deploy: https://play.google.com/store/apps/details?id=ru.meefik.linu...

If desktops become more expensive, it'll just mean people are more motivated to make tools like this. Android phones and tablets are basically treated as cheap commodities and there's an extremely competitive market for them, if anything, the entry price has gone down.

Now, admittedly I'm not sure how this situation is on iOS, but maybe someone could link similar tools on there?


Sure, but how productive are you when coding on your phone vs on your desktop?


There's definitely a productivity hit, but it's also not the kind of thing that isolates people. For the cost of a cheap BT keyboard you can be fairly productive using just a tablet, even a phone maybe. If you have a TV and use casting, a phone could do quite well.


The contention wasn't that coding on a phone or tablet isn't productive, it was that you can't do it. I love Pythonista on my iPad, and the latest version makes coding on my phone surprisingly feasible. I wrote a version of snake on my 6+ with my kids.


I always find that comments like this are doom and gloom and never celebratory that we might reach a point where computers are finally stable and secure enough to be treated like appliances. The first automobiles required dozens of steps just to start the engine, did people back then lament the difference between "cars" and "mechanics' cars"?


Here's my blog post on this from 5.5 years ago: http://drupal4hu.com/future/freedom.html


I was a teen in the mid 1980s when computers were too expensive. The situation today is orders of magnitude better, it isn't really comparable at all.

For one thing, a Raspberry Pi is more powerful than the Sinclair ZX-81, Apple IIe, or Atari 400/800 I had access to back then, and much cheaper.


I think this split is going to get worse, especially in the Apple ecosystem. Their apparent desire is that the iPad and iPad Pro become the computer replacement, but there isn't (nor will there be anytime soon) a way to create applications for those platforms from that (iOS) environment. Their, admittedly market-speak, statements on stage hint that they would like to see tablets/phones replace desktops for the larger userbase. Odd times.



This. It paints an unsettling picture for the future of general-purpose computing.


What do you mean you "cannot" code on tablets and smartphones? There are nice interpreters and compilers in the official app stores for major mobiles OS, aren't there? I've used Python on iOS, Android and Windows Phone. Also J, Ocaml, some dialects of Lisp, C# and Ruby, that I can remember now (each language on at least one of those OSes, sometimes more than one). Not to mention these devices all come with web browsers which means at the very least you can use JavaScript (I've done at least one Project Euler question on an iPod Touch in CoffeeScript standing in line at the bank.)

The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)


Those are "second-class" or even "third-class" citizens in the ecosystem. Can you use those language interpreters and compilers to write apps that can interact with the system and exchange data with the other apps? That's what makes the traditional, document-centric, PC ecosystem so powerful.

While being able to play around with Project Euler can be fun, it amounts to "I can run a Turing-machine simulator" and doesn't represent anything more than a tiny fraction of what people want to do with computers when they say they want to "code". You may as well be playing one of the numerous puzzle games that involve much of the same concepts.

To use your iPod Touch as an example, if it were more like a traditional desktop computer, you would also be able to do things like write an app to manage your music playlists.

The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)

Not surprising if it's a Windows tablet based on the PC architecture - those are far closer to the traditional desktop than iDevices and Androids. If by C# compiler you're referring to the one that comes with the .NET framework, that's been there since the first versions; pity it's not so well known with MS trying to push VS as hard as possible...


> Can you use those language interpreters and compilers to write apps that can interact with the system and exchange data with the other apps?

Yes, you can. See https://play.google.com/store/apps/details?id=com.aide.ui


You can code on a tablet, same way as you can stand in line at the bank. Neither is very efficient, and most of us would rather not.


Apparently, you can't really do anything on a tablet or phone. Or at least I can't. My phone app chose this particular comment as a nice time to play up and do a double posting. Hence the copy further down, which I am not allowed to delete.


Well, of course you can technically code on a smartphone or tablet; I simply meant that it is absolutely not practical.


I tried to code on a smartphone; never again. I am many times more productive on a desktop.


I've recently started using Termux on my phone with a bluetooth keyboard - I'm as productive as I would be doing dev over SSH. All the tools I'd use on a server are there (node, git, nano, etc). I've written a small API server with it and it wasn't a disaster. Admittedly I'm more productive when I'm on my laptop with Atom and a couple of monitors, but if that isn't an option I can still do work. It's a bonus rather than an alternative.


Absolutely! Programming is much more comfortable with a physical keyboard. I wasn't suggesting coding on a phone is ideal, just that it is possible.


Edit: Just realized you might have meant _on_ the phone as in 'using the on screen keyboard'. Ug. That would be truly awful.

During one weekend in which my only options were android devices, I was pleasantly surprised by the packages available in termux. With tmux, git, and ssh installed, I mounted the tablet at the right height and connected a quality keyboard via usb. I actually forgot that I was coding on a tablet!

The phone experience was far more sensitive to maintaining good posture throughout, but being strongly incentivized to keep good posture actually made the experience more pleasant in a way. However, this particular phone was around 1280x720 I believe - seeing individual pixels again, and being pixel-limited (not physical size limited) in the use of panes in tmux were the only facets I found truly unpleasant.

I'm eager to try coding with a high res VR headset.


All of this will disappear within the next 5 years, as the distinction between programming and consumption narrows down.

It seems like a vast majority of software developers, consciously or not, do not wish for software development to improve beyond a certain point as they fear it would become too accessible and therefore lower the value of their skills. The truth is that we actively make programming as difficult as possible, and everybody loses. I can understand that writing code as text would make sense 50 years ago, but there is no excuse for this today.

Consumer UI is now reaching the 3rd dimension with AR and VR, while software development is stuck in the 1st dimension. A long linear piece of string. It is difficult to believe that those who have the power to create great consumer UX are completely blind to improving their own. Software development has some of the worst UX ever.

The solution to all of those issues has been known for a while, and is dead simple to understand. We need to create a new communication platform, powered by ideas from logic programming and the semantic web. Think of it as 2 huge semantic knowledge graphs, the first describing the real state of the world, the second describing the ideal state of the world. Build a UI on top of it (which should feel more like a graph-oriented Excel than RDF/Prolog) to let people, agents and IoT devices communicate "what is" and "what should be". Then, all it takes is an inference algorithm that can match providers with seekers, get them to commit to some set of world changes (through some sort of contract), and let people manage and track the commitments/tasks they're expected to get done. That's it, that replaces 80% of software needs. Thank you very much.

Knowledge Graph -> Semantic Marketplace -> Smart Contracts -> Task Management


Interesting ideas (even though your prediction regarding the next five years seems rather... bold). Where is this vision sketched in some more detail? Any links?


Half of what I ever said online is about this. Somehow, I never got to write a detailed description of the vision.

Perhaps I should take this opportunity to make that happen.


As I understand it, Microsoft has copied the Linux kernel system call interfaces and provided their own underlying implementation.

Given that Microsoft supported Oracle's view that the structure, sequence, and organization of the Java programming interfaces were covered by copyright law, then surely they would also agree that the same holds true for the Linux kernel system call interfaces.

I don't like the APIs-are-copyrightable decision, but given that's the current state, why aren't we talking about how this is a violation of the Linux kernel copyright license -- the GPL?


One could argue that the Linux syscall interface is closer to an ABI than an API, since you don't directly code against it. Don't know what implications that has in this context, though.

One legal thing that I'm also wondering about is the "Linux" trademark. I thought the Linux Foundation kept close tabs on how you were allowed to use the trademark, and one requirement was that the Linux kernel was actually involved?


> One legal thing that I'm also wondering about is the "Linux" trademark.

This probably explains why they never talk about Linux (at least I never saw it), but always about Ubuntu. I guess they have an agreement with Canonical.


They mention "Linux" all the time. The feature is called Windows Subsystem for Linux (Beta).


It is perfectly legal to mention someone else's trademark. Else we would have text in books like "He walked over to the bar and ordered a popular sugared caffeinated beverage made by a company headquartered in Atlanta". You just can't say something that might confuse the reader into thinking that you own the trademark.


But doesn't calling it "Windows Subsystem for Linux" imply that actual Linux is involved, or even included? Suppose I marketed a range of flavoured additives to put in your Coke and called it "Simon's Flavours for Coca-Cola"?


IANAL, but that would be perfectly fine ... you are totally allowed to write a piece of software, and say that it's for Windows, or for Mac. Or manufacture a piece of hardware that's for iOS, or Android. Or an aftermarket car part "for Honda", etc.


But Wine isn't 'for Windows' and this Microsoft thing isn't 'for Linux'. It doesn't involve the thing trademarked as Linux itself at all, in fact it replaces it. It's as though I wrote my own OS kernel with a Linux emulation layer to replace the Linux kernel and called it "Simon's Linux" or "Simon's OS for Linux". Suppose Wine renamed themselves "Linux Emulation for Windows".


The "for Linux" part is their hedge, I think. I recall Microsoft forced a project known as "Ultimate Windows Boot CD" to change its name to "Ultimate Boot CD for Windows".


Maybe it should be called "Windows Subsystem for Linux Applications (Beta)"


I'd say that this part of the ABI is definitely something people code directly against - so there's really no distinction between API and ABI. The parts of the ABI you don't are things like the function calling conventions, type widths, etc.


Do you really think a multi-billion-dollar company like Microsoft wouldn't have their legal team all over this? Do you not think they would have researched this, discussed their implementation, and made sure everything they were doing complied with the GPL?


This same "multi-billion-dollar" company had an AI bot tweeting Nazi propaganda a week ago. They spectacularly failed with their Xbox One release, having to completely retool and regroup. Their Windows Phone efforts remain a complete disaster and are now doomed to failure.

The whole "they're a big company...don't you think they've thought of this!" argument (and its many "do you really think they'll lose?" variations) is always a fallacy. That doesn't make the argument about the copyright of ABIs valid, but at the same time the notion that Microsoft is big therefore they must be right is absurd.


Well, if we really believe the bot was AI, then it wasn't Microsoft's bot. It was its own "artificial intelligence".

But the rest of those have nothing to do with their legal team. They wouldn't implement a copy of another OS's interface in this OS without making sure it was legal to do so.


You would think that Google wouldn't implement a copy of the Java APIs in their operating system without making sure it was legal to do so, but apparently not.

Ozweiller is quite right. Big companies copy other people's stuff, breach trademarks (Metro?) and generally mess up all the time.

I doubt the ABI emulation is actually a problem, but calling it "Windows Subsystem for Linux" might well be a trademark violation as it doesn't involve Linux itself. Imagine if Wine called itself "Linux Subsystem for Windows". I think Microsoft would be deploying their legal team right quick.


I think the AI comment was more about the fact that they didn't safeguard against seemingly obvious outcomes, such as internet trolls trying to get the bot to say bad things. Many companies block no-go words during username creation (hitler, racist words, etc.), so why didn't Microsoft?

It might not have been simple to do, but still - hard not to see the outcome.


lol, what the hell are you talking about? This thing is SUPPOSED to learn. You can't have AI and restrict what it learns; it defeats the entire purpose. Isn't this the same thing that happens to people? They go around the internet and soak up knowledge, sometimes racist, harmful misinformation, but they soak it up nonetheless.


Well, to be clear, I didn't say restrict what it learns; I said safeguard against outcomes. Or are you arguing that Microsoft knew the bot would spew racist insults in a laughably short timeframe, and only planned to run the bot for said timeframe?

The very fact that they had to pull the plug seems to suggest that it was not desired, and as such, it should have been safe guarded against.

An example safeguard: limit what it can say. If it has racist/etc. stuff in it, literally don't send it to Twitter. The bot still learns, the algos don't change, and Microsoft still gets to see how the given AI behaves in full public view. And above all else, the bot isn't a Microsoft-branded Heil Hitler AI.

It sounds like you believe what happened is perfectly within reason - if that's the case, why do you believe they pulled the plug?


Did they even have any sort of filter? If they at least blacklisted these words [0], then that seems like a reasonable enough effort on its own. However, these developers would have had to be living in a bubble to not know about trolls from 4chan.

All in all, this is a lesson that some high-profile person/group eventually had to learn on our behalf. Now, when an unknowing manager asks why your chat bot needs to avoid certain offensive phrases because, "our clientele aren't a bunch of racists", you can just point him to this story. The actual racists are tame by comparison to what trolls will do to your software.

[0] = https://github.com/shutterstock/List-of-Dirty-Naughty-Obscen...


Now to be fair, we restrict what humans learn all the time. We try to teach morals and ethics to our children. We generally don't let kiddos run wild and learn whatever is around without some sort of structure.


Aside from the obvious outcome that it would be manipulated (which anyone could predict, and if well thought out it would have had "learning guards"), it didn't require some deep artificial learning -- you could tell the thing to repeat various offensive statements. It was just a giant miscalculation.

However the legal department of every company on the planet makes a risk:benefit analysis, especially in fuzzy areas like copyright law (which we've seen with the Java case....an API isn't copyrightable, then it is, then it isn't, then it is). The assumption that if Microsoft did it therefore it must be without risk is folly.


Sure, but that doesn't answer my question of 'why'.


Because it's a faulty premise? There is no license violation, that's why a license violation isn't being discussed (your question).


You make it sound as if the "law" is easy. Everyone can have their own interpretation of the law, and often those interpretations are complete opposite. That's why we have two sides in a court of law.

Microsoft's lawyers likely decided that the move is "worth the risk". But they wouldn't be able to be 100% sure that it's either legal or illegal anyway. You can only be 100% sure after someone challenges you in Court, and then judges decide a certain way.


Lawyers never decide that something is "worth the risk"; that's not their job. In this context, the job of the lawyer is to assess the legal risk, and it's a business executive's job to decide whether a risk is worthwhile.


Microsoft's company culture is like the opposite of "move fast and break things" (as it probably should be when you're a platform company),

but legally speaking, they seem to have adopted that culture.


"As I understand it, Microsoft has copied the Linux kernel system call interfaces and provided their own underlying implementation."

Not sure your understanding is correct, but in any case is that not precisely what Wine does on Linux when running Windows apps? Are you worried about Windows copyright violations with Wine? From Wine webpage, "Wine translates Windows API calls into POSIX calls on-the-fly. . ." [1]

[1] https://www.winehq.org/


Windows doesn't provide consistency between versions at the interrupt level, and Wine doesn't provide any interface at that level. It's basically a PE loader and a bunch of regular userspace functions in DLLs that do everything.

Linux, on the other hand, provides exactly that, and this wrapper makes it so that you can actually run "movl $1, %eax; movl $0, %ebx; int $0x80" and it will actually call the equivalent of exit(0).


Yes Windows does provide such a thing: http://j00ru.vexillium.org/ntapi/

Would be very interesting to see a Wine based on this concept instead...


Of course it does at some point, but they're not meant to be used by the public, which is why:

* you're linking to a URL with leetspeak in it, and not MSDN.

* the codes change so frequently.

* some of them disappear or are renumbered, even in service packs.

If we wanted to do this for Wine, then it would also require emulation or something like OS X's Hypervisor.framework to catch system calls, which seems heavyhanded when we have working code already.


> If we wanted to do this for Wine, then it would also require emulation or something like OS X's Hypervisor.framework to catch system calls, which seems heavyhanded when we have working code already.

It could also be done as a kernel-level loadable module in the sort of style this Linux subsystem is being done, which is more what I was thinking. Changing a few numbers for every Windows service pack (really, not even necessary if you only support certain versions of ntdll, for example) might not be so bad compared to re-implementing bug-level compatibility with every Windows API.

Calling Wine "working" in its current state is a bit of an overstatement.


All I know for sure is that the whole situation was a mess. See http://fortune.com/2015/06/29/supreme-court-oracle-api/

What's interesting to me right now is whether or not Microsoft is saying it's OK in one context (linux interfaces on windows), and not the other (java interfaces on android), and why are they different?


MS doesn't seem to think that long-standing programming practices have suddenly been outlawed, as it is continuing to implement Apple's proprietary iOS frameworks in Project Islandwood (https://github.com/Microsoft/WinObjC). The ruling on API copyrightability does not set precedent in any of the courts that normally hear copyright cases.


libc is LGPL. The "problem" you're highlighting here is EXACTLY the opposite of what the entire open source movement was designed to prevent. It couldn't be less of an issue.


I'm not talking about libc. I'm talking about the kernel system call interfaces. They are defined by the kernel, not libc.


IANAL, but the kernel _code_ is GPL. Nothing forbids you from implementing some functionality from scratch under a different license, as long as you don't copy and paste GPL code.

A system call is a function entry point. The code executed when those functions are called is GPL-licensed in Linux, but Microsoft wrote their own from scratch.


What I'm pointing out is the absurdity of the current state of law. It doesn't matter if Microsoft wrote all of the code from scratch, because according to them the definitions of the interfaces alone -- not code and not header files -- fall under copyright law.


Oracle's position is that their interfaces were Copyright (C) Oracle and to use them you had to abide by the license from Oracle.

The whole point of open source licenses is to make it explicitly clear what you're allowed to do with the (yes, still copyrighted) material. They too provide strict terms and require you to honor the license.


So what defines a derived work? If you implement from scratch, but in order to implement must look at the existing implementation in order to get it correct, then it's possible (probable?) that the new implementation is directly derived from the original implementation.

It's not like Linux has a published, license-free spec. Unless they reverse-engineered the system calls (possible, but omg I think that would have been hard), I'd be willing to bet that this could easily be considered a work derived from the GPL'd code.


Well, they haven't been getting it correct, so they've got that going for them :)

https://github.com/kernelslacker/trinity/pull/9


Those are effectively "header-only" and clearly documented as GPL. This is the basis of the entire movement!


This new Microsoft is open-sourcing everything; maybe this underlying implementation is on the way to being opened (maybe it's open somewhere already).


This implementation is definitely a work derived from the NT kernel. If reimplementing an API propagates copyright, the only allowed licensing situation would be for MS to release the complete NT kernel under the GNU GPLv2 license.

This is not going to happen. And that's not entirely bad news, because it might give us leverage (by estoppel) if MS ever wants to litigate against free software on the theory that reimplementing an API propagates copyright.


If reimplementing an API propagates copyright, the only allowed licensing situation would be that MS releases the complete NT kernel under the GNU GPLv2 license.

The GPL is not that expansive, it only extends to programs built upon GPL-licensed stuff, not for programs that just happen to have a GPL application running on them.

  A compilation of a covered work with other separate and independent works,
  which are not by their nature extensions of the covered work, and which
  are not combined with it such as to form a larger program, in or on a
  volume of a storage or distribution medium, is called an “aggregate”
  if the compilation and its resulting copyright are not used to limit
  the access or legal rights of the compilation's users beyond what the
  individual works permit. Inclusion of a covered work in an aggregate
  does not cause this License to apply to the other parts of the aggregate.


I'm not talking about that. I'm talking about the hypothetical situation where an API itself is considered inheriting copyrights from the reference implementation -- similar to what Oracle argued about Java and Android.


Even if it was, the NT kernel wouldn't be affected, that's my point; only the translation layer would be.


But didn't they support Oracle's suit against Google's free software Dalvik on this very basis?


is it a derived work of the NT kernel if it's contained in a separate loadable driver? my understanding is that's where this system call interface is contained, and as I understand it, you can have GPL NT drivers..?


It is not very probable that only public interfaces are used to implement this subsystem, so the GPL's special exception for major components of the OS arguably would not apply. It also explicitly won't apply in the first place, because MS is squarely in the excluded case, "that component itself accompanies the executable": the core NT kernel (the component) accompanies the Linux subsystem (the executable).


I'm pretty sure you can use only public interfaces to implement this subsystem. You can do quite a lot to NT via the public interfaces, and it helps that the stuff in the linux subsystem basically only needs to work with itself (i.e. you don't need to send signals to Windows processes... but even if you needed to, you could use public interfaces to do that from the kernel...)


The new Microsoft is the same one that has been continuously and recently involved in copyright litigation. That didn't just go away overnight because they started a PR campaign.


It would be pretty weird if, for example, the SFC used the same legal argument as Oracle.


Perhaps it is an evil plot to get that ruling overturned by presenting an obviously beneficial counterexample?


Hmm. This is an interesting question. Who would the copyright holder be in this situation?


Yes but they gave money to the Linux Foundation so that makes it all betterer...

/s


I have to say, after the initial excitement, I'm a bit disappointed by how this is implemented. Apparently there is little to no interaction between the Linux world and the Windows world in this system. I don't see the benefits over running a classical Linux-as-a-process like coLinux, or something like Cygwin or MinGW.

The option to run unmodified executables is nice if you have closed-source linux binaries, but they are rare, and this is directed towards developers and not deployment anyway (where this might be a useful feature).

When I heard "Linux subsystem", I was hoping for a fuller integration. Mapping Linux users to Windows users, Linux processes to Windows processes etc.. I want to do "top" in a cmd.exe window and see windows and linux processes. Or for a more useful example, I want to use bash scripts to automate windows tools, e.g. hairy VC++ builds. And I thought it would be possible to throw a dlopen in a Linux program and load Windows DLLs. Since I don't need to run unmodified Linux binaries, I don't see what this brings to me over cygwin.

I am hoping though that this might be a bit more stable (due to ubuntu packages) and faster than Cygwin, and that it might push improvements of the native Windows "console" window.


Given that you've touched on two of the "No; but the old POSIX subsystem did." points ...

* https://news.ycombinator.com/item?id=11416392

... would you, too, agree with a call for its resurrection?

* https://news.ycombinator.com/item?id=11391841

* https://news.ycombinator.com/item?id=11391798


I would bet that it will only get better over time and include a fair amount of the things you're talking about within a few years. It's obviously a big push to pick up the current "I use a Mac for development because it's Unix" crowd; I'm sure they're taking it seriously and would want the support for developing in a Unix style to be A+.


How will they pick up the crowd who chose their OS because it's mostly FOSS and not a branch of the NSA used for mass data collection? Most devs I know care about privacy, and it was one of the main reasons they switched from Windows.

Most Linux programmers I know aren't Windows devs as much as the MS shill team would like everyone on social media to believe.

You don't have to be part of the FOSS crowd to support FOSS. I'd wager the majority of the programmers you know would be ecstatic for Windows or OS X to go open source, and if they use OS X/iOS they probably do care about their privacy.

I don't know a single developer that uses a windows phone or a Windows workstation purely out of choice, most devs I know that are ingrained in Windows are using it because they have to.

Stack overflow statistics show that programmers disproportionately choose OS X and Linux over Windows when compared to typical desktop usage (Linux use skyrockets among programmers compared to desktop).

These "Linux programmers who want Windows" only exist on the internet as far as I can tell. No one actually wants to use Windows.


My anecdata is different. Most programmers I've met in my career aren't part of the FOSS crowd; they own iPhones and MacBooks and happily run software from the Mac App Store for their dev purposes, or are on Windows, have never built a tool they use from source, and without a line-by-line guide wouldn't be able to do that on Linux.

The few coders I know that are part of the FOSS crowd have their ThinkPads or Dell XPS Developer Editions with Gentoo, Ubuntu or Arch. This isn't for them; it's for the ones already on Windows or OS X.


Except if you expect a snappy UI.

OS X and Linux with X11 or even Wayland are easily 20 years behind Windows in terms of UI responsiveness and snappiness. The constant input lag of Unix desktops always drives me back to Windows for my workstations, although I am a wholehearted Windows Server hater and despise everything that comes with it. It's stable, extremely fast, and when edited/hacked right, also privacy-aware.


I use a Mac alongside a Win10 machine, the Win10 being brand-new i7 hardware with DDR4 RAM. And OS X's UI is far, far snappier and hangs quite a bit less. My Win10 machine will often just hang for no reason at all for a few seconds, then pop up whatever I had switched to.


Which GPU are you using? I have a 2015 Mac Pro with two D500s; before that I had a mid-tier iMac. The Mac Pro stands directly beside my "beast", an i7 6700, Titan X, NVMe custom machine.

I can't really describe it, but it's bothering me, and I would put my hand in a fire betting that it's somehow measurable. It feels like the input lag of a cheap IPS monitor. Every click, every slide, every window resize has this minimum lag of maybe 10 to 50 ms, worse on Linux. It's hardware-independent, because it was already bothering me with a MacBook Pro and my old iMac. I haven't booted OS X to do production work in years because of that, tbh (every piece of Apple hardware I own is booted 99% of the time into Arch or W10E), but I did it just now and compared them side by side. It's still there. It's driving me crazy. I know that I am hypersensitive to lag and stuttering because of my former Quake career.


I have replicated this by cmd-tabbing to a cold app window. But if I just tab between, say, Messages and Firefox, I can't detect even the tiniest pause. If I click on a new tab in Firefox, I see the tab-add button's color change instantly, even if the tab takes 50ms to open.

On my beast Win10 machine, everything is also instant; it's just that randomly throughout the day it will lock up for 10 seconds while tabbing to an app or something... which is just so weird and annoying. CPU shows it is pegging a thread for the opening app, doing who knows what.

My Mac is a 2013 MBP with an Intel HD 5200.


Microsoft has comprehensively undone any progress they had made here; Windows 8 and beyond are a sludgy, slow, unresponsive mess, owing in part to their incomprehensible reimplementation of much of the user shell in XAML.


Mapping the processes across implies all sorts of strange things - what happens if you try to send a Linux signal to a Windows process?

Mapping the users is possible and "SFU" did this, with a couple of caveats (Windows requires group and user names to be different, while UNIX systems often have groups with the same name as users).

I don't think this is a Linux or GNOME killer, but it might put a dent in Cygwin and git-bash.


Wine somehow solves that. Even if almost nobody uses it this way, a Windows application is still able to use native APIs if it detects that it's running under Wine. For example, the Windows Steam client checked the Wine version long before native Steam appeared.

I think Microsoft could do something similar.


Windows does have signals, just not nearly as many as Unixes, and they are mostly built around ways to kill a process. So I imagine all the non-terminate ones will be mapped to just be ignored.

https://msdn.microsoft.com/en-us/library/xdkz3x12.aspx


> what happens if you try to send a Linux signal to a Windows process

Performs the default action as if it were a Linux process. Mostly terminate or ignore.


Hmm, I guess I'd assumed that I'd be able to use bash scripts to automate Windows functionality (that was probably the most exciting part for me!). You're saying that's not currently possible?

What about more basic things, like moving files around, etc.?

I'd be happy if I never had to write a (Windows) batch script again...


Windows files exist under /mnt/c or something. I think you could.


> And I thought it would be possible to throw a dlopen in a Linux program and load Windows DLLs.

No. Execution mode incompatible - see e.g. https://github.com/wishstudio/flinux/wiki/Difference-between... for details.

What would really interest me: how was fork() implemented by MS here? The same method as http://stackoverflow.com/questions/985281/what-is-the-closes... or have different interfaces been created?


Since the NT kernel supports fork, I guess that wasn't a problem, given that the bash mode doesn't use the Win32 API (which lacks a fork() function).
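Whatever the kernel-side mechanism turns out to be, the effect is observable from the shell itself: every subshell and command substitution is a forked child. A trivial POSIX demonstration:

```shell
# $$ is the current shell's PID; a command substitution spawns a child shell
# (fork + exec), whose PID must differ from the parent's.
parent=$$
child=$(sh -c 'echo $$')
echo "parent=$parent child=$child"
```

If this prints two different PIDs inside Bash-on-Windows, fork is working, however it is implemented underneath.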


I can confirm that you can run (at least some) GUI apps if you start an X server on Windows (like Xming, etc.), and export DISPLAY.
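The setup is the standard X11 client/server split, nothing WSL-specific. A sketch, assuming an X server (Xming, VcXsrv, ...) is already listening on display 0 on the Windows side:

```shell
# Point X clients inside the Linux environment at the Windows-side X server.
export DISPLAY=localhost:0.0

# Then any X client should render in the Windows-hosted server, e.g.:
command -v xeyes >/dev/null && xeyes &
```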


Oh man, thanks for the tip! Works wonderfully. I just apt-get'ed synaptic and it seems totally functional :) Xemacs and Angband don't work, but the fact that so much works already bodes pretty well for the future.


Wait, someone still uses Xemacs? I think you're the first I've run across in a long while.


I don't currently have access to a Windows box, but am working on a CLI app in Swift for OSX and Linux. It would be interesting to see if this effectively makes Swift cross-platform "for free".


yay! \o/

can you check what happens after you wake from sleep/hibernation? are those apps still fully functioning?


I did not try GUI stuff, but when my tablet went to sleep while executing a long-running command, when I returned to the bash shell, the command failed with a message stating "interrupted syscall" or the like. Not sure if this is the common/intended behaviour.


This frankenstein contraption isn't going to work correctly until they add so much bloat that it's not even worth it.


Love it


Can you please test graphics with LLVMPipe?


I'm using xming as well.

glxinfo reports

  OpenGL vendor string: VMware, Inc.
  OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 256  bits)
  OpenGL version string: 2.1 Mesa 10.1.3
  OpenGL shading language version string: 1.30
glxgears works for about a second, then crashes:

  XIO:  fatal IO error 11 (Resource temporarily unavailable) on X  server "localhost:0.0"
         after 732 requests (732 known processed) with 0 events remaining.
When I run it under strace, it keeps running.


Thanks! It's great that it works.


I wonder who came up with the "Bash on Windows" tagline. That was a really smart idea. I think most of us would have run with "Emulated Linux syscall layer from user mode processes on Windows". Promoting bash specifically seems to me like engineering marketing genius -- less technically knowledgeable people are more likely to be familiar with bash, while the more knowledgeable are going to think "wait...what? how do they do that? that would mean...", which works better than simply saying what they have done.


Is this sarcasm? Bash on Windows definitely comes before "Emulated Linux syscall layer from user mode processes on Windows" ... it's a great name, sure, but marketing genius?


It's probably not a reference to https://news.ycombinator.com/item?id=11391931 , but see that anyway. Naming is up for discussion according to the developers.


Not totally clear to me, since most of the devs around here saw it and thought more along the lines of Cygwin. This is a much bigger deal than Cygwin!


Smart move by Windows. I guess that developer usage of an OS ultimately results in development for that OS, though I don't have any numbers for this. It seems to me that a lot of developers, especially at startups, have switched to OS X with its shiny GUI and UNIX compatibility. I'd hazard the guess that this will ultimately result in OS X becoming more of a developer target over time. Initially for developer-related stuff (see Dash as an example that is only available for OS X (and Zeal for Linux)), but later probably for other stuff as well.

What's illustrative for the dominance of *NIXes in development are the number of projects on Github that contain only +NIX installation instructions and no Windows instructions (again, anecdata).

So if Windows wants to remain competitive, they need to retain developers. And as the +nix way of developing seems to be dominant now in quite a number of fields, Microsoft needs to adapt.

Why, you're asking, do I think that the +NIX way of development is dominant today? In a nutshell, Web -> Unix Servers -> POSIX shells -> Languages that work best with POSIX -> OSs that are POSIX-compliant.

Edit: Asterisks don't work as expected here. At least not in a Markdown-compatible way.


Is it that smart? Being developer friendly sounds like just plain common-sense, not some genius breakthrough. The question should be more why has it taken them so long to get to this point.


They always tried to be friendly to Microsoft developers, and I think sort of assumed that UNIX was going to go away when they won. But it's now clear that UNIX has won for web services, and the web has beaten old-style client-server. And the Windows remote admin/cloud admin/mass deployment features appear to have lost as well.


Maybe. It is definitely the common-sense thing to do today; five years ago, it would have been smart. From a pre-Nadella perspective, you could have called it revolutionary, but now we're used to Microsoft participating in OSS, so it's much less so.


Wasn't Ballmer's "developers, developers, developers" chant more than five years ago?


I'd just like to interject for a moment. What you’re referring to as Windows, is in fact, GNU/Windows, or as I’ve recently taken to calling it, GNU plus Windows.


Just remember to attribute this particular flavour of this well-worn running joke to Mike Gerwitz.

* https://news.ycombinator.com/item?id=11446696


at MS they're just waiting for hurd to be available


I call it NSA/Windows.


Windows 5 Eyes ?


So 2016 is _finally_ the year of Linux on the Desktop?


This brings many of the parts that make up Ubuntu to Windows, but the Linux kernel isn't one of them.

That being said, I'm typing this comment on my workstation running Linux, and I for one am getting very tired of this year-of-the-Linux-desktop joke.

What OS you run is an individual choice, stop trying to declare a single winner.


Linux users were going around for a while saying "this is the year of linux on the desktop", and yeah it kind of turned into a bit of a meme.

Realistically linux did hit it big, but on a phone OS. It's now one of the most installed kernels in the world, but its brand is hidden. Linux is also incredibly important in the server space, and everyone knows this.

Linux will never have its year on the desktop in my opinion, but it will still be all over the place in the server/phone space. It just won out in other areas than the desktop.


> Linux users were going around for a while saying "this is the year of linux on the desktop", and yeah it kind of turned into a bit of a meme.

They were not actually, this is a myth. A few tech "journalists" wrote such articles which people started making fun of. But no, regular Linux users never claimed that, or at least not in any significant number that I know of.


>> I for one am getting very tired of this year of the Linux desktop joke

You != masses.

There might not be a clear single winner, but there is a clear single loser. Statistically speaking.


Only if you understand desktop market share of an operating system as a battle.

Myself and a lot of other people are using GNU/Linux and other libre operating systems with great pleasure and, finally, growing hardware support. I could not care less if 90% of desktops are Windows systems or if an additional 9% are OS X machines or whatever.

tl;dr: Just use what works for you. If it supports your ethical values, it's even better!


Sure, but if all users believe in "use what works for you", they shouldn't get pissed off when others make jokes about it. *nix users are not going to like it, but in the real world comparisons are bound to happen between things of the same category, and you can't prevent that. Better to move on, and that was the reason I wrote my comment.


A bit offtopic, but 2009 was the year of the Linux Desktop for me (using Ubuntu). Performant on older hardware, eye candy (compiz) on newer hardware, the Gnome 2 interface was familiar and incredibly polished, and you had a lot of choice (linuxshouldbeaboutchoice.com).

They lost me with all the rewriteritis and monodaemonisation that followed. I switched to MacOS (hackintosh) and was very happy for a while, since it could run all the Unix stuff, most of the productivity stuff (MS Office), and many games. It was for a long time the most plain, conservative OS (while Windows was going crazy with 8).

But recently, I've found Windows to be the OS that "just works" and gets out of my way - which was pretty surprising to me.

If anybody killed alternative desktops, it is not MS, but the desktops themselves.


>I've found Windows to be the OS that "just works" and gets out of my way

I've had the opposite experience. Windows does not "just work" and it certainly does not "stay out of the way".

I have USB headphones I can't use in Windows because they connect but Windows doesn't let me switch to them. When I plug in an external monitor my OS comes to a crawl and it doesn't speed back up until I restart the whole thing. When I unplug a monitor it loses my windows.

And did you hear the story about the guy who lost his job because Windows decided to update the .NET framework right before he was scheduled to do a presentation at a business meeting? Doesn't sound like Windows stays out of the way to me.

I wish Windows "just worked" but it doesn't. It breaks all the time unless you're a power user. Giving my parents Linux was the best thing I ever did for them because it turned their laptops from a source of constant frustration to an always-on communication machine. We went from hundreds of ads and dozens of toolbars on windows to a Linux machine that just works.

Now I'm just trying to get my dad to switch to Linux for work so he doesn't have to install his printer drivers again every time he wants to print something. All he uses for work is Chrome any way.


> I have USB headphones I can't use in Windows because they connect but Windows doesn't let me switch to them.

For most people, this would be a problem with the USB headphones, not with Windows. On the other hand, if the USB headphones work well in Windows but not in Ubuntu, then it's a problem with Ubuntu, not the USB headphones.

This is why it's impossible to have a rational debate about the state of the "Linux desktop".


Totally agreed. 11y OSX user, tried to give Windows 10 a chance as the host system on a $2800 tablet (Vaio Canvas). HiDPI still a mess, tablet mode is worse than ever because it doesn't automatically bring up the keyboard for most apps (and even hides the keyboard tray icon by default - wtf?), system settings are complete and utter shit compared to OSX or even Windows 7, touchpad is unusable as always. Only good thing is the nice looking hardware, especially the stand, but it's tainted by the hideous power adapter (and very short batt. life no matter how you use it).


2007 was the year of Linux on the Desktop - with netbooks, Linux was literally competition for Windows on the desktop for the first time and Microsoft had to make XP super-cheap.


Pretty much, finally MS decided to make it happen. :) On a more serious note, you can get all of the benefits of windows and linux in one without docker or any vm running. This is great!


Let's say it is like getting a seat at the big Microsoft Desktop table.


No, Linux on the Desktop has just died. I expect both the KDE and Gnome projects to be dead within a (very) few years, with X.org probably close behind.

All hail Winux though. (That's the name for this mix I came up with.)

Before you downvote this without thinking ... consider, for example, KDE is severely understaffed and this will deplete them further. Who will bother with X.org bugs and drivers now? What's the point? Who is your target audience? You need to drink a real big dose of Stallman kool-aid to continue with Linux if this thing on Windows works as promised.

I have been using Linux solely on my laptop since 2004. I am sick of the constant driver problems. Yes, yes, you can connect to your home router or the router in the cafe. Now go and try and connect to an enterprise network. Perhaps with VPN.


KDE has always been understaffed and will always be understaffed.

But it has been improving all the time with every single release. The problem is that you (and millions of people who were looking for Linux desktop to "win") are just not excited anymore since new form factors (phones, tablets) arrived.

But I actually think the Linux desktop is a winner. There are several high quality desktop environments suitable for all kinds of use cases.

Yeah, we are not dominating the world. That was a short naive dream in the early 2000s. But we have awesome desktops and that's what matters.

Disclaimer: minor KDE contributor but these were my thoughts not KDE's.

Btw, give KDE a try, it's so good these days :)


I am typing this from KDE. Obviously? I guess. And so what. That is not the problem.


I do all of the things you mention without problems. I definitely don't think GNU/Linux will die as a result. First off, syscall emulation will always be clunky. Secondly, many people care about their freedom. Thirdly, what makes you think that a majority of people using GNU/Linux will switch? I haven't had driver or network problems for the past 3 years on any of my various machines.


Same here. I use Windows maybe once or twice a year, and it's always a pain. "Winux" won't change that for me and to be honest, I don't really want to support Microsoft. Sure, I think VS is a pretty nice piece of software, but that's about it - too few reasons to switch.



No need to be flippant. Yes, I read those comments and I don't share those experiences. I've used my fair share of odd hardware and I've never had problems that couldn't be resolved with half an hour of Googling.


Keep in mind Windows 10 lets the folks at Redmond remotely remove software from your machine. Before you go declaring Linux dead you might want to think about how that could impact you. Not to mention that Microsoft has been known to Embrace, Extend, Extinguish.


This is true for every single popular OS though -- Windows, iOS, Android. Not that it's good, just that's the reality.


Yep. But not true on Linux, yet.


> Linux on the Desktop has just died.

It becomes even more approachable via "Winux". Let people learn the basics of the CLI and get comfortable with more open source tools -- then reinstalling your computer with a Linux distro (and putting your Win-only apps in a VM or on Wine) is a small move.


I highly doubt that "Winux" will really result in a large increase in people learning CLI tools. People who want to use those tools already have a way of doing so. Consider that most people have difficulty using a basic word processor, much less Unix tools.


I'm considering the masses. But many in higher-education are on Windows (my experience here in the Netherlands). I can imagine that, for instance, they will use Winux at some point.

Same for some IT professionals that use Windows (either since their job demands it, or out of preference). They might install Winux at some point to get some aspect of their work done faster. Again a lower barrier to get your CLI skills up and get comfy with common open source tools.

I believe there is a lot of value in "CLI skills and common open source tools" that Windows users are currently missing out on.


And why would they switch to full linux when they can get their cli tools AND Windows productivity?


Security, parity with the server platform, rough edges of Winux, "cool", wanting to learn something... Just to name a few.


As I understand it, vendors will probably have paying clients for Linux desktops for some time to come. It is very hard to imagine a situation where the demand for a Linux desktop becomes so low that no-one will maintain the required software infrastructure.


>Winux

I've deemed it "Frankenstein OS" because they've sewn a whole bunch of parts together to make an unwieldy monster that doesn't quite work as well as the individual pieces did on their own.


I'll be interested in hearing from anyone who uses this and finds it offers them more than they are currently getting from cygwin or VMware+Linux VM. I realize it's a very different beast from cygwin - an entire user-mode Linux environment, as opposed to Windows builds of the Linux userland - but, on a day to day basis, it will be interesting to see what people do differently, and why they would use WSL as opposed to just running a Linux VM on their workstation if cygwin isn't sufficient.


I'm using it on my tablet instead of previously using a VMWare image and msys+mingw+cygwin.

The main advantage over a VM is no resource partitioning: on a 4GB RAM tablet with 64GB eMMC, you can't allocate more than 2GB RAM to a VM without trouble, and putting 20GB of disk aside for it is also a pain. Power efficiency is much improved too (even an idle VM drastically reduces battery life, while Ubuntu on Windows doesn't).

Compared to Cygwin: a lot more packages are available, a lot more just works out of the box, and you can painlessly use online tutorials for Linux, which often assume Ubuntu and don't consider Cygwin a target platform.


Since you are already up and running, may I ask what friends have been asking me? Can you apt-get install zsh?


Doesn't look like zsh is fully working yet:

  root@localhost:~# zsh
  compaudit:105: wait failed: invalid argument
  compdef:95: wait failed: invalid argument
  localhost# exit
  zsh: you have running jobs.
  localhost#

Will have to try again after the next build.


Have you tried it in another terminal emulator (like mintty) or an X session (using a X running on Windows or a VNC X server)?


Ask away. I don't have the tablet with me, but I can answer what I know so far.

I haven't tried zsh, but I'm pretty sure you can install it. I installed a bunch of applications, including using third party repos and ppa's. I don't see why zsh would not work.

Perhaps zsh uses some unsupported escape sequences (for instance, screen doesn't seem to work), but you can readily work around that by using another terminal in windows (mintty) or launching a VNC server from Linux and a VNC client to your localhost I assume.


The reason that zsh gives the errors reported earlier in this sub-thread will be different to the reason that screen doesn't work.

* https://news.ycombinator.com/item?id=11417227

It's readily apparent from the error messages earlier in this sub-thread that the zsh problem isn't to do with escape sequences. And of course the screen problem (at least the one known so far) is not escape sequences, either.

* https://news.ycombinator.com/item?id=11416392


I couldn't get zsh to work (same errors as tallanvor) but other shells did work, like fish with oh-my-fish. I had to run bash in ConEmu to be able to display some unicode characters (those used by powerline themes/fonts) - http://i.imgur.com/JT8Zi5j.png


One serious advantage of WSL over a VM is that you don't have yet another layer of NAT at play... That's the biggest one to come to mind... you also don't have a full Linux system spun up just to work on a few tasks in a bash prompt.

Don't get me wrong, I've been fine using a "remote" VM machine locally for linux, and a lot of my work the past few years has been that way (CIFS in the VM, to run a gui editor on the desktop), but to be able to run closer to native is a good thing imho... hopefully it stays well supported.


My potential use case: run the IDE on the host (Windows UI is fast, font rendering is great), but use git, etc. from the Linux command line (to get file permissions, etc. right).


Install git (the real git from https://git-scm.com/, not a fancy GUI) and use the git bash shell today. It runs msys and has a full set of Linux utilities like ls, etc.


The powershell extensions in Git Shell (not git bash) are also fantastic and less clunky (doesn't feel like mingw). Comes bundled with github for Windows.


I've been doing that for a while, I setup Samba in the VM, and map a drive in the windows host... Works pretty well.


I would love to see how it compares to a Linux VM for tasks that involve a lot of shelling out to run other tools.

The native Windows versions of git and CMake can be awfully slow [1].

[1]: https://gist.github.com/jibsen/7ebeddde3bc2bfd421b96ae53a824...
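A crude way to compare environments is to time how long it takes just to spawn many trivial processes, which approximates what git and CMake do when they shell out. A sketch in POSIX shell (the count is arbitrary):

```shell
# Spawn /bin/true N times; process creation dominates the runtime, so wrapping
# this in `time` gives a rough per-spawn cost for the environment.
spawn_many() {
  i=0
  while [ "$i" -lt "$1" ]; do
    /bin/true || return 1
    i=$((i+1))
  done
}

# Compare environments with e.g.:
#   time spawn_many 200
spawn_many 200
```

On native Linux this is cheap; the gist linked above suggests Windows-native ports pay far more per spawn.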


Can anyone comment to how nice or awful running some sort of Linux VM (maybe under Hyper-V) and using Putty to SSH to it for development on Windows would be? This work is promising, but doesn't appear "quite there", yet. I run OSX now, but don't really ever develop directly on the machine and am mostly SSH'ed to Linux hosts for development.


I'm using a combination of Ubuntu Server 16.04 VM (4GB RAM, 2-cores, 80GB HDD) + Virtualbox + MobaXterm (best SSH client on Windows). It works beautifully. I previously used Putty but it's a bit sparse regarding some options and gets confused sometimes when sending key codes via SSH session. Also I get the option of either using the Vbox shared folders or using the included SFTP client in MobaXterm, which follows the folder you're currently in in the terminal. It's a simple drag and drop. It even has an X server included so I can run Linux desktop apps on Windows.

I got tired of running Linux directly on my desktop. Compiz crashes (worked in 14.04, not working in 14.10, started working again in 15.04, not working in 15.10...), can't keep my select sound output, terrible font rendering, awful HiDPI support, graphics drivers are a mess, and on and on...

Since I basically just need the non-graphical parts of Linux (Bash + Tmux + Vim) I'm very happy with this setup.

PS: Forgot to say that I have a very beefy machine (Skylake Core i7 4Ghz, 32GB DDR4, fastest consumer NVMe SSD from Samsung) but I've found that this setup works well on basically any machine.


It works OK, and I did it for some time. The biggest pain points are the lack of integration with the file system (I always forget how to set up shared folders) and that it can be hard to set up a decent ssh client. I use Cygwin ssh and mintty, and then ssh into the VM or external machines. Also there are the usual performance and space issues of VMs.

For this use case, "Ubuntu on Windows" seems to be a nice improvement.


Hyper-V can work well. I use it very frequently on my Surface Pro 2 with Win10. Currently I'm running FreeBSD 10.3, though in the past I've had Linux distros running in Hyper-V and that was good too.

It's a very convenient arrangement for developing web apps/servers. I have the servers running on FBSD which can be accessed from a browser on the Windows host. This perfectly replicates connecting to a remote server, so when the app is working properly and committed to the repo on the remote host, it's almost guaranteed to work as intended!

When I acquired the SP2 I originally thought I'd dual boot. Turns out that's difficult to accomplish, but in a way using Hyper-V is better because I have Windows and FBSD running at the same time. Of course the VM imposes limits so not optimum for every purpose but good enough for my uses.

I should mention that Cygwin is installed as well. Runs Bash nicely in a terminal, and a good way to ssh to remote servers as well as FBSD in Hyper-V.


Our team uses a Vagrant box (on VirtualBox) as a development server. Getting it to run on Windows requires a couple extra steps compared to OSX, but it's not too bad. You will probably want to use an alternative to the VirtualBox filesystem sharing though, as it's extremely slow. We use a combination of rsync (installed on Windows via cygwin) and NFS (using this plugin for Windows: https://github.com/winnfsd/vagrant-winnfsd).

msysGit is also handy when using a Windows host.

Rsync is straightforward and has fewer permissions issues to work out when sharing with Windows, but is uni-directional. NFS is a true share rather than a sync, but restricts what you can do to the directories on the guest in terms of permissions. There are other options too, like bindfs and unison, but I haven't personally explored those in depth.


It works amazingly well. Vagrant helps a lot setting everything up. I even used several VM instances at the same time trying to mimic microservices running in different servers. I edited all the files in my fave IDE running in Windows, a file change would trigger an automatic server restart on the affected service/VM. Debug worked just fine as did mobile debug using Vagrant Share. It's my workflow for web stuff.

I sound like a Vagrant fanboy or shareholder but I'm just a very happy dev since I started using this setup.

https://www.vagrantup.com/


Yep. I've setup Vagrant for my development server environments, and I use the vagrant-digitalocean plugin to deploy to DO. It's easy and convenient (though my host system is also Ubuntu).


I did this in 2013 - used Windows 7 because of the Toshiba brightness bug in Linux (until Windows rot annoyed me more). I started on PuTTY but rapidly switched to Cygwin and ssh.


What a time to be alive! I'm holding out on upgrading to Win10 until I buy a new PC since my 7 -> 10 upgrade ties to hardware, but I hope to have that done by the end of next month. I can't wait to try this out.

edit: Specifically, I want to understand to what extent - if any - it will make some of the horror problems you have working with certain Python libraries (compiling Numpy on Windows is like pulling teeth) a thing of the past. I'd be more than happy to work in WinBash for Python if it means having the easy Linux install processes available for some of the more scientific packages.


If anything it's going to make it worse. When you type python in a command prompt which version is going to run, the windows version or the Ubuntu version? Even worse when you pip install a package what pip are you running, windows or Ubuntu?

Python on Windows is painful mostly because of the number of binary packages that have to be compiled, since distributing binary packages has only recently come into vogue with Python. You can save a ton of trouble using something like Anaconda, or honestly just run a Linux VM. If you're compiling numpy you're doing something wrong IMHO--use a prebuilt version that's optimized for your processor (ideally built with Intel's commercial compiler with full SSE, etc. optimizations).


Command prompt should run the windows version and bash should run the linux version. Why would there be an issue here?


Exactly. The Linux file system will have Windows mounted into it but I think Windows won't be able to see the Linux filesystem. We will see.


Linux and Windows can both see each other's filesystems, but they are visible at specific mount points in each environment.

You can't just use /home/chx/todo.txt as a path from any Windows application, but you can find that file through some other path.
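The mapping between the two path styles is mechanical. Here's a hypothetical helper (not a built-in tool - it just illustrates the convention) turning a Windows path into its Linux-visible form:

```shell
# Illustrative only: C:\Users\chx\todo.txt -> /mnt/c/Users/chx/todo.txt
win_to_wsl() {
  drive=$(printf '%s' "${1%%:*}" | tr 'A-Z' 'a-z')   # lower-case drive letter
  rest=$(printf '%s' "${1#*:}" | tr '\\' '/')        # backslashes to slashes
  printf '/mnt/%s%s\n' "$drive" "$rest"
}

win_to_wsl 'C:\Users\chx\todo.txt'   # -> /mnt/c/Users/chx/todo.txt
```

Going the other way (finding a Linux-side file from Windows) works through whatever directory the Linux root is stored under on the Windows side.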


Very interesting. What about case sensitivity?


The underlying filesystem (NTFS) is case-sensitive, so I think it should basically work fine. Sure, Windows tools are case-insensitive, so if you use bash to create foo and Foo in the same directory you'll probably only be able to access one of them from Windows Explorer, but I doubt that's much of a problem for most people


It does seem to be case sensitive! Creating two files called a and A works. They even show up in explorer. It doesn't work to delete one of them from explorer, though. No errors - they just reappear. After trying, I now have a file called a that I can't delete from bash or explorer!
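The Linux-side half of that experiment is easy to reproduce on any case-sensitive filesystem:

```shell
# Two names differing only in case are distinct files on a case-sensitive fs.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/A"
ls "$tmp" | wc -l     # prints 2
rm -r "$tmp"
```

What the Windows-side tools then do with those two files (as described above, Explorer can't delete them cleanly) is where it gets messy.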



So will the bash prompt not see any tools like windows python in the windows system path? That seems like an odd decision if so.


You will find windows executables mounted just like the rest of the windows file system, so in bash you can find /mnt/c/windows/notepad.exe but you can't execute it from the bash subsystem (I presume).

The opposite is also true: the linux subsystem files are mounted under a regular directory in windows, so you can see all the files but from the normal windows subsystem you can't execute the linux binaries.

It means there is a big wall between the two systems, and you can't really automate windows things with bash instead of PS if you wanted. At this point though I find that to be a benefit - It would be fantastically confusing if you typed "find" or "python" and had to wonder whether a linux program or a windows program would actually execute.


I wonder how hard it would be to implement PowerShell Remoting for this environment


It's already a thing of the past. Since numpy & co got packaged as binaries in .whl wheels, you can just pip install them on windows.


You have to find the .whls though, you can't compile from source. That's fine, but it's a second class solution.

Moreover, getting stuff like OpenCV to work is a pain, and I find that the deep learning packages (e.g.: theano) get even worse.


Maybe it's because I haven't been following this very closely but I'm confused. Does this mean I can do things like compile Haskell or OCaml from terminal as easily as I do on my Linux install? Can I use apt-get?


Yes, that's the plan. This is a syscall translation layer. In theory everything should run -- or most things. I would not expect wireshark to run, for example, but I have very high hopes for autossh, because Scott Hanselman has shown Redis running, so higher level networking is there.

You might need http://www.straightrunning.com/XmingNotes/ for GUI.


This is really good news. I rushed through my new laptop purchase and forgot to check the wireless card. Turns out the linux driver for my Realtek wifi causes a soft CPU lock up so I've been stuck on Windows 10 and doing work in a VM. Not nearly as fast and smooth.



Yes


Can I use Docker?


I would be shocked. As I mentioned above, I wouldn't expect wireshark to run either. From what we gathered so far, this is a syscall translation layer and

> No new system calls are added for cgroups - all support for querying and modifying cgroups is via this cgroup file system.

Per https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.t...
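That design choice is visible from the shell: cgroup control is entirely file-based, so on a real Linux kernel you can see the cgroup filesystem type and controllers advertised as files, with no dedicated syscalls involved. (WSL presumably exposes neither.)

```shell
# On a real Linux kernel, cgroup support shows up as a filesystem type...
grep cgroup /proc/filesystems
# ...and the known controllers are listed in another file - all file-based.
cat /proc/cgroups
```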


No cgroups = no systemd.

Windows: a better linux than linux.


There are lots of reasons that systemd will probably never run on the Windows NT Linux subsystem, a few reasons that upstart won't run, and the possibility that one could run daemontools-family toolsets with a fair degree of success; but there is a likely overall problem with invoking a service manager such that it is in a daemon context (outwith a login session) in the first place.

* https://news.ycombinator.com/item?id=11416376


Very informative linked post. Would recommend everyone click through and read it!


I am a new-ish Linux user. Can someone explain to me why systemd is so bad? Is it because it monopolized so many features? So it's not the UNIX way of tiny programs working together? I feel like I am missing something.


You basically answered it yourself :) that's the typical argument, yeah. I personally really like systemd, and it's made our infrastructure at work easy to manage and deal with.


It's complicated.

Unix was built around the human readable output of one binary being the input of another.

dbus, the carrier for much of the traffic between systemd parts, is far from human readable.

Also, one reason they give for developing everything in a single blob of code is that they can then change the protocol as they see fit.

This in turn makes it hard for third parties to replace a component, as they will constantly play catch up with the systemd developers.

Take logind for example. It depends on systemd-init being there and handling cgroups. ConsoleKit, which logind replaced, could be used on top of any init.

Gentoo forked udev into eudev after the former was merged with systemd, because it became a right pain to extract udev from the larger systemd code, even though at the time of merging it was promised that udev would still be usable separately. This because at every systemd release, the extraction process changed in some way or other.

With the traditional unix tools I can probably pipe some output from a GNU binary into busybox into a BSD binary and get the expected result. And if I don't, I can break the chain down into parts, look at what each one produces, and make adaptations right there in the terminal.
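
For instance (a trivial made-up pipeline; any similar combination of implementations works the same way):

```shell
# Three independent binaries cooperating over plain text on stdin/stdout:
printf 'b\na\nb\n' | sort | uniq -c | awk '{print $2, $1}'
# → a 1
# → b 2
```

Swap any stage for the busybox or BSD equivalent and the result is unchanged, because the contract is just lines of text.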

Anything similar for systemd will require a compiler and specialized tools for debugging dbus and whatnot.

Maybe all this is fine in a devops environment where everything is in containers or virtual machines. But Linux got where it is because it was not just flexible, but also field repairable thanks to its unix heritage.


Microsoft hasn't announced this (yet), but syscall translation seems to be really well positioned for their container push. I fully expect that MS will announce Linux containers running on Windows Server using WSL. Already the perf is there.



What on earth? How is that even possible? o_O


There is basic emulation for /dev and /proc too but it is minimal.


No. Among other things, Docker depends on the namespace flags to clone(2), which the Windows mapping layer surely does not support.

http://man7.org/linux/man-pages/man2/clone.2.html
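
To see what those flags control: every namespace a process belongs to is visible under /proc, and Docker creates fresh ones via exactly those clone(2) flags. A quick sketch (requires a Linux kernel; the unshare line is illustrative and usually needs root):

```shell
# Each entry here corresponds to a clone(2)/unshare(2) namespace flag
# (CLONE_NEWNS, CLONE_NEWPID, CLONE_NEWNET, ...):
ls /proc/self/ns            # mnt  pid  net  uts  ipc  user ...
readlink /proc/self/ns/mnt  # prints an id like mnt:[4026531840]
# unshare(1) is the CLI wrapper around the same flags, e.g.:
#   sudo unshare --mount --pid --fork --mount-proc ps ax
```

If the translation layer doesn't implement these flags, nothing namespace-based can work on top of it.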


Linux Docker? I doubt it--there's a lot of low-level kernel stuff Docker needs, like creating a chroot. That said, MS has been working on a Docker version for Windows. I dunno if they ever released many details on what exactly it is, though; it's been a while since I've heard much from them on it.


Docker does lots of other complicated things (creating namespaces, setting up cgroups, setting up network interfaces and firewall rules). But the Docker windows container stuff is all in the main Docker repo. It requires proprietary software to use so I'm not really interested.



Hm, ok, but that's "just" syscalls, right? Anyway, that would REALLY be cool. Hope they make it.


I would like to be able to answer you, but I want to ask: can I use LXC/LXD?


I expect this shiny new feature was introduced for exactly this reason: to serve as a backend for Docker, both for local development and cloud services. So support is most likely coming.


This single thing will make or break this feature for me. If I can use Docker in Windows 10, I do not have much use left for my Ubuntu VM.


Man, I do think this is a big step for Windows. It's 2016 and it's still complex to pull off a du -sh or df on Windows. Things we take for granted on *nixes. Much love.


According to http://stackoverflow.com/a/868290/259130

    function directory-summary($dir=".") { 
      get-childitem $dir | 
        % { $f = $_ ; 
            get-childitem -r $_.FullName | 
               measure-object -property length -sum | 
                 select @{Name="Name";Expression={$f}},Sum}
    }

I could get a shorter, non-exact version if I were on Windows.


that's almost as easy as just typing "du -sh"


Add that function (or whatever combo of attributes you want to see on a regular basis) to your PowerShell profile, as well as this line:

New-Alias -Name "du" -Value "Directory-Summary"


For 'du':

    > Get-ChildItem C:\windows | Measure-Object -property length -sum


    Count    : 28
    Average  :
    Sum      : 6407920
    Maximum  :
    Minimum  :
    Property : length

For 'df':

    > get-WmiObject win32_logicaldisk

    DeviceID     : C:
    DriveType    : 3
    ProviderName :
    FreeSpace    : 21796540416
    Size         : 31685865472
    VolumeName   :

    DeviceID     : D:
    DriveType    : 5
    ProviderName :
    FreeSpace    :
    Size         :
    VolumeName   :

    DeviceID     : Z:
    DriveType    : 4
    ProviderName : \\veertu\share
    FreeSpace    : 44674220032
    Size         : 399999991808
    VolumeName   : VeertuFS


And you just built du with a single line of PowerShell. Also, you demonstrated the full syntax. The short form would be (leveraging aliases, positional parameters and unambiguous short parameter names):

For du:

    ls C:\Windows\ | measure length -sum

For df:

    gwmi win32_logicaldisk


Microsoft provides a du implementation in sysinternals stuff, but you have to install it and I think even accept a licence in a graphical window the first time you launch it...

They have neglected the CLI for years, and PowerShell, while I guess it has its uses for scripting, is light years behind *nix stuff as an interactive shell.


>but you have to install it and I think even accept a licence in a graphical window

You can -accepteula from the command line and the "installation" is downloading and extracting a zip file. If there's an internet connection, you could even run it straight off of Microsoft's servers via SMB.


Or you can just download du.exe from SysInternals to get a du-like tool.


GNU/Windows. That is just GNU running on the Windows kernel, not the Linux kernel running in Windows!


No. This is a "Windows subsystem" [1] that implements a LINUX compatible ABI for LINUX application binaries. GNU has nothing to do with it.

[1] https://en.wikipedia.org/wiki/Architecture_of_Windows_NT


How come?

It's heavily marketed as "bash on Windows", and that "bash" is a GNU Bourne Again Shell, a part of GNU Operating System, developed under GNU Project.


This is not about Bash or any GNU software per se. Bash is just an example of a Linux executable that can be run on this system. One can apt-get install many more Ubuntu application binaries.

Please invest some time to understand what it's about technically.


I do understand that, technically speaking, this is an implementation of Linux-compatible APIs/ABIs on Windows, so an ELF binary targeting a POSIX-compatible environment can be run on Windows. No dependencies on GNU OS parts here, of course.

However, please notice that it's also marketed as - quoting the article - "the ability to run native Bash and GNU/Linux command-line tools [on Windows]" and currently implemented as GNU-based OS (Ubuntu) running on Windows. So - in practice - essentially, it's MS-supported (although hosted by Canonical) GNU on Windows.


> So - in practice - essentially, it's MS-supported (although hosted by Canonical) GNU on Windows.

I don't care how it's marketed.

Let me remind you that there have been numerous ports of GNU tools for the Windows operating system in the past. This does not allow you to run any more GNU tools on Windows than you previously had.

Therefore, essentially, this is not about "GNU on Windows". This is about running "Ubuntu Linux software on Windows" including, of course, and in addition to numerous other tools, the GNU tools.

Also, the original statement was: "That is just GNU running on the Windows kernel." This is obviously not just that.


I honestly don't really see the point in this.

If you like Ubuntu/Linux more, then just install Ubuntu/Linux on the computer without Windows. Why go through the additional layer of Windows?

Perhaps the use case is limited to people who need to run Windows/Mac-only software like AutoCAD or some Adobe software.


Because many people like Windows and develop on/for it, but also develop server-side code which generally requires Linux... so we don't have to keep virtualising OSs or running multiple machines - we can now develop for both within a single desktop.

This is exactly the type of thing that many people want.


What I wanted is to run Windows in a Virtual Machine on my Linux box. Can I do that? Not without paying Microsoft $150. You can get their trial to run for 90 days, but it sucks having it expire and shut off after a few mins.


Windows isn't free software. You can't expect to run it for free just because it's in a VM. Honestly being able to run it for 90 days for free (after which all you have to do is go back to an earlier snapshot) seems pretty good to me.


Or you can purchase one of those cheap Win8/Win10 pads for less than $100 and connect it via VNC to your Linux box. Works pretty well for me with an 8" Dell Venue.


I like the Unix command line, but I don't really like Desktop Linux anymore (after having used it for >10 years); that's why I use OS X at home. At work I'm required to use Windows because I develop Windows software, so it's actually quite exciting that I get to use my most useful Unix utilities on my work computer as well.


> I don't really like Desktop Linux anymore

I used KDE 3 and Gnome 2 for many years (Windows too), and switched to LXDE/OpenBox after KDE 4 and Gnome 3 turned out to be unusable.

Although my current desktop is very simple it has become one of my best desktops ever because it can be configured to the extreme. It is very suitable for developers who want a clean workspace which doesn't get in their way like all the other modern desktops (Win 8+ also) which focus more on eye candy than usability.


May I ask you why you don't like it? As a new-ish Linux convert, I'd like to see your point of view as someone who used it for a while.


Here is what I consider to be the perfect example of where this fits into the development world. From my employer, I'm given a laptop, running Windows 8.1. All of my work for the past 2 years has been in developing a .NET line-of-business application, but I was recently asked to prototype an application to automate and streamline a collection of Excel spreadsheets. (Don't laugh; I've kind of made a career out of this sort of thing.)

For this project, I could choose the programming environment. Well, my tool of choice, for the past 10 years, has been Rails. I prototyped the application on my MBP within 10 hours, but needed to be able to work on the site with my corporate laptop, so I set up a Linux VM, and set up 2 networks (one host-only to share files, and one NAT'd to get through the corporate firewall), configured the folder shares, and got everything going the "VirtualBox" way. It's a pain.

Like a lot of Fortune 500's, the IT resources are locked down to the point of being combative. There's no way I can take my MBP to the office and work on the company LAN. (You can get an exception, but you have to ask for special permission every single day.) This project is SUPPOSED to allow me to open a shell, install RVM, install Ruby and git, download my sources, bundle install, and do `rails -s` to do development. (The application is just using sqlite for the database, so I don't even need to mess with MySQL for now.) If everything works like it should, I can just use Sublime Text 3 as my editor to work on the files, and host on the Linux subsystem. When it's time, I can `git push` my sources to the TFS server, and deploy them on the Linux VM running in Azure.

So that was kind of long, but this is how I see it working and fitting into my world. Against my better judgement, I'm letting my gaming rig (re)upgrade to 10 to try this out. I don't know when my company's IT department will standardize on Windows 10; we just got 8 last year. I just want to see if this can really fill the niche it's supposed to.


See https://news.ycombinator.com/item?id=11416884 for some things that might be a problem.


That's a really important point.


Well... nuts. On the other hand... I'm positive that my company's IT policies were -- like a lot of other massive manufacturing companies' -- written by proxy via an even-more-massive consulting agency back when SOX happened. I can't help but wonder if this will escape their notice. But, yeah, if Norton anti-virus heuristics get excited... Crap.


> Perhaps the use case is limited to people who need to run Windows/Mac-only software like AutoCAD or some Adobe software.

Or many, many games.


That's where I'm sitting. Sometimes I want to jump between gaming and working on personal development projects. Even with an SSD, rebooting anytime I want to do that is just a pain.


If you run on desktop hardware you can use a VM with VFIO GPU passthrough. It's much less of a pain than rebooting or working on a frankenstein OS.


> It's much less of a pain than rebooting or working on a frankenstein OS.

No, it's an incredible pain in the ass to get this working properly, when I could spend 20 seconds rebooting


Can't agree with you. Adding one option to the grub config for Xeon CPUs, plus one optional kernel patch for Intel consumer CPUs (you don't need the ACS patch if only one PCI/PCIe port is used), is clearly not a "pain in the ass".

It might have been hard to set up a few years ago, but now it's super easy. With AMD hardware everything just works out of the box.


Because I have to use Linux for work even though I hate it and half the drivers for my laptop aren't working.


There are a lot of companies that run large numbers of tools only available on Windows, often including expensive in-house tools.

Having people in the company run different OSes costs a lot of money because of the duplicate tools that have to be adapted or created for each OS.


As a user of both worlds, I can think of: better touchpad drivers, saner high-DPI support, font-rendering preferences, and a simple compositing window manager that supports easy snapping.


Because Microsoft wants to lock developers in to their expensive OS so they can make billions more, obviously. It's because programmers are expected not to be the smartest bunch and to just be shoveled around with a bit of money and marketing expenditure. You know, "we took all the free stuff and made it proprietary, come serve our corporate goals". Meanwhile they have been allegedly screwing over Linux for years with backend deals with companies like Foxconn.

http://ubuntuforums.org/showthread.php?t=869249


I do not get the hate. This is superb


Many (including me) feel that this is just the start of a new EEE cycle by a panicked Microsoft, and will be killed off by Microsoft once they managed to reverse their current downward trend – just like other supposedly community-/interoperability friendly projects before, e.g. this project's direct predecessor SFU.


Googled for "Microsoft SFU" and found a bunch of Office 365 links before finding the wikipedia page:

https://en.wikipedia.org/wiki/Windows_Services_for_UNIX


This seems like a way more developed strategy for keeping developers engaged... .NET going open source and multiplatform is a pretty big gesture for something that's just about reversing a downward trend.


And the problem with that is what ? You are afraid that you might like it. You migrate and they will kill the project ?

If yes then...well...same can happen to any kind of software / project


No, that's not what embrace-extend-extinguish is about. The worry about EEE is that they establish dominance through vertical integration, introduce incompatibilities through both incompetence (bugs) and malicious behaviour (features), which will weaken and destroy the free standard implementations.

I'm not worried though. This is a neat hack, and may be useful for some people who for whatever personal reason won't switch to Linux proper, but it will not gain anything like the dominance required to push through incompatibilities. Unix applications already deal with a heterogeneous environment, to say the least, and Winux will just be one more participant; not a particularly important one at that.


*nix servers now handle 99% of the web. Microsoft isn't going to push through breaking standards.


> You migrate and they will kill the project ?

Everyone migrates, draining valuable developer resources from projects like cygwin, mingw and colinux and others. When Microsoft kills it, they will have killed off not only their project, but also the community projects that could replace it.

Meanwhile, because the NT kernel has vastly different performance characteristics (e.g., bad forking performance), we're going to see an increase of "Linux/NT" optimized software that will perform poorly on native Linux kernels, pressuring more developers (and ops) to buy into Microsoft's effectively proprietary solution, completing a vendor lock-in that will continue to bleed the industry long after it stopped being beneficial (see also, every other instance of vendor lock-in ever).
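
A crude way to see the fork/exec cost being referred to (absolute numbers are machine-dependent; the point is to compare the same loop under WSL and under a native Linux kernel):

```shell
# Spawns 200 short-lived processes; process creation dominates the runtime,
# so this loop amplifies any per-fork overhead in the kernel underneath.
time sh -c 'i=0; while [ "$i" -lt 200 ]; do /bin/true; i=$((i+1)); done'
```

Shell-heavy workloads (configure scripts, git hooks, make) are essentially this loop at scale, which is why fork performance differences show up so visibly there.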


Installed it to give it a go. It's impossible to install Java on it. This makes it fairly useless for my purposes. sigh


What goes wrong when you try to install Java?


if you go the apt-get route you get a sha256sum mismatch on both java7 & java8. If you try to be clever and manually download it and throw it in the cache, same story. If you try to be really really clever and manually download it and try to manually extract the tar it throws a bunch of cannot create symlink: invalid argument errors. I spent a good 2+ hours trying to force it to install in various ways. For now at least it seems java is not meant to be on Windows Bash.
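
A quick way to check whether the symlink support those installers need is working at all (run inside the new bash prompt; the mktemp directory is arbitrary):

```shell
# If this fails with 'Invalid argument', the Java package scripts will too.
cd "$(mktemp -d)"
touch target
ln -s target link
readlink link   # → target
```

On a working system readlink prints the link target; on a filesystem layer without symlink support, the ln step is where it breaks.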


For those interested in this, here's a more detailed report from someone named Joachim Moeyens.

* https://community.lsst.org/t/lsst-stack-on-ubuntu-linux-on-w...

Be aware that the Java8 installer/uninstaller has other potential symbolic link problems (not "invalid argument", though) that exist on actual Ubuntu Linux.

* http://askubuntu.com/questions/608961/

* http://askubuntu.com/questions/653885/


Does anybody know if this interface is Linux kernel functions + whatever POSIX is required to run Ubuntu stuff? I haven't seen that addressed, which strikes me as strange because it could have some pretty serious implications. Am I worrying over nothing, or could this make POSIX irrelevant pretty quickly as the new portability standard becomes the Linux ABI. I've cheered on Microsoft's recent moves in open source, but if they wanted to deal a serious blow - rendering POSIX irrelevant would be pretty devastating.


It's Linux syscall emulation. As for the death of POSIX, many unixes have had the same (even superior) functionality for years. POSIX wasn't dead yesterday. It isn't dead today.


Thanks for the clarification. I wish I could be as unconcerned, but I remember what IE did to web developers.


Yeah, we got Firefox out of it. ;)


Which bash version is it? Is MSFT actually shipping GPLv3 without killing their entire company? Could it be that GPLv3 isn't a death blow to business? Whatever happened to cancer?


They don't. Canonical ships the GPLv3 software.


What about character sets? Do I still need to 'chcp 65001' from the DOS prompt to type/cat utf-8 encoded text files before running bash?


Why is it promoted with Ubuntu since it's basically - as put here https://news.ycombinator.com/item?id=11446420 - the implementation of the `Linux kernel system call interfaces' ?


It's not bash on Ubuntu on Windows, right? It's Ubuntu's bash on Windows via a compatibility layer.


A graphical layer isn't present, but you can apt-get install anything that runs in the background/command line. vim, emacs, etc. At least as far as I know.

A commenter here mentioned having difficulties installing java, however:

> if you go the apt-get route you get a sha256sum mismatch on both java7 & java8. If you try to be clever and manually download it and throw it in the cache, same story. If you try to be really really clever and manually download it and try to manually extract the tar it throws a bunch of cannot create symlink: invalid argument errors. I spent a good 2+ hours trying to force it to install in various ways. For now at least it seems java is not meant to be on Windows Bash.

https://news.ycombinator.com/item?id=11446913


This is awesome! Can't wait to get my hands on it. If this works well, it is like a dream come true. I never wanted to abandon Windows because of a lot of music software that I am using. Now I will have the best of both worlds. Neat.


Just switched to the 'fast' ring and have installed all updates, but can't see the new 'Windows Subsystem for Linux' item in 'Features'. Anyone know how to fix it?


This is great for cross-platform development. Linux will benefit from this.


I've been using Docker for my dev environment (Python, Django, Postgres, etc). I expose a folder with my code to the Docker container so I can keep editing the code on Windows using Sublime. One thing that has been annoying me is the fact that I can't get Python code completion on sublime because Python and the packages are in the container. Does anyone know if it's possible to point Sublime to the Linux subsystem and get code completion? Also, has anyone tried installing Tensorflow yet?


From previous posts it looks like the Linux image is just a folder in your AppData somewhere, so it should be trivial.


They sure seem to deliver. Unfortunately I am not a windows insider. I will probably wait until the anniversary update.

I guess this Bash on Ubuntu on Windows won't be available for Windows 7?


I think it was specifically announced as Windows 10 only.


https://insider.windows.com/ — Becoming an insider is just a registration away, takes no longer than 5 minutes.

Use preview builds at your own risk, though.


I put this on my machine last night, and quite enjoyed playing around with it. apt-get, python and everything I tried worked. Even vim works great, as long as you don't mind 16 colors. The one thing I could not figure out was getting 256 colors out of Command Prompt.

Has anyone come up with a solution for that yet? I wonder if you can install something like xterm, and get that running outside of Command Prompt...
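
For anyone else poking at this, a couple of quick probes (the second one just emits escape sequences; whether they render as 256 colors depends entirely on the terminal in front of them):

```shell
# Ask the terminal what it claims to support (Command Prompt reports 16):
tput colors 2>/dev/null
# Print a few 256-color background samples and see whether they render:
for i in 0 63 127 191 255; do
  printf '\033[48;5;%sm %3s \033[0m' "$i" "$i"
done
echo
```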


I am still on Windows 8.1, so if anyone that tries it can confirm if this works well with ConEmu and if Vim works well? Also, what the performance is like compared to running stuff on full stack linux. Also, does one have access to full hardware, like GPUs? That would be a good start. On Windows, my tool of choice was/is Babun... but damn 32-bit cygwin and it tends to get real slow (git especially so).


It works fairly nicely with ConEmu and other terminals. The performance is also quite good: I installed clang and built a big-ish C++ project and it compiled faster than using MSVC on Windows directly (10 vs 12 mins, roughly).


Sorry for the off-topic, but I have a legal question regarding Windows Insider. Is it legal to install an Insider build without activation and keep it running, as long as it stays in the fast updates ring?

Currently updates postpone the temporary license expiration, but I can't find an answer on how the licensing actually works. I only run Windows in a VM and I don't want to mess with my 8.1 system's genuine license.


Unfortunately it seems that it won't install if you are running as a domain user: https://pbs.twimg.com/media/CfZhLruXEAEp56x.jpg:large

It works if I try as a local user on the same machine. Also Windows Store otherwise works fine for that domain user.


I have it installed, and I don't know how you are supposed to set up bash profiles with this folder structure, or for instance if I need to move something to my /bin/ folder to set up commands. I'm sure there is a way, but it's not quite like base Ubuntu since it's using the Windows folder structure and permissions.
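
One low-friction approach that sidesteps the odd folder layout entirely: keep personal commands in ~/bin and extend PATH from ~/.bashrc instead of touching /bin. A sketch (same as on stock Ubuntu; the `hi` command is just a made-up example):

```shell
# Create a per-user bin directory with a trivial command in it:
mkdir -p "$HOME/bin"
printf '%s\n' '#!/bin/sh' 'echo hello from my command' > "$HOME/bin/hi"
chmod +x "$HOME/bin/hi"
# Persist the PATH change for future shells, and use it in this one:
echo 'export PATH="$HOME/bin:$PATH"' >> "$HOME/.bashrc"
PATH="$HOME/bin:$PATH" hi   # → hello from my command
```

bash still reads ~/.bashrc on startup regardless of where the underlying files physically live on the Windows side.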


- Does ls -li show/emulate inode numbers? (I don't know whether NTFS has inode numbers or not.)

- Are find with -exec, and xargs, supported?
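
For reference, these are the invocations in question; they only rely on fork/exec and pipes, so they make a decent smoke test for the translation layer:

```shell
# -exec runs the command once per matched file:
find . -name '*.txt' -exec wc -c {} \;
# xargs batches matches into fewer invocations (NUL-delimited, so
# filenames with spaces survive):
find . -name '*.txt' -print0 | xargs -0 wc -c
```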


    1125899906857921 -rwxrwxrwx 1 root root 21 Apr  7 07:56 test2.txt
    562949953436608 -rwxrwxrwx 1 root root 13 Apr  7 07:56 test.txt

Two files created seconds apart. Some sort of internal NTFS construct, maybe?


Thanks for the output, it looks interesting. If these entries don't change with the next ls -li, then yes, it's the NTFS inode number in readable format.


I really want to try this on a VM in OSX, just so it could be "Bash on Ubuntu on Windows on VirtualBox on OSX"


I am hoping there is going to be CentOS/RedHat available like this too. It would be pretty awesome.


I'm sure you could just copy all the files from an existing system, then delete the ubuntu files.


Sad that the VMs (1) aren't updated to build #14316 yet.

(1) https://developer.microsoft.com/en-us/microsoft-edge/tools/v...


I don't think 14316 is sufficiently tested for that use case. Those VMs will probably stick to the stable builds


Finally a reason to upgrade my office PC to Windows 10 (I run linux on my personal machines)...


There's a lot of comments related to the legality of this and whether or not it violates either the GPL or at least the Linux trademark. Even if it wasn't technically legal, I don't think the right parties have anything to gain by suing.


So, if I understand correctly, one can now run Docker containers "natively" on Windows?


This is not the case. Depending on your definition of native you could run Docker containers on Windows even before this.

https://msdn.microsoft.com/en-us/virtualization/windowsconta...


No, that doesn't suit my definition of native. ;)

Too bad, that would really be a game changer for me. Running bash itself though... yeah, ok, whatever. But maybe I'm not the target audience.


> "No, that doesn't suit my definition of native. ;)"

Why is that? Because the containers are running in Hyper-V? From a user standpoint I doubt you'd notice any difference, especially once Hyper-V is supported in Windows 10:

https://blogs.technet.microsoft.com/windowsserver/2016/04/04...


I would imagine resource consumption being much worse on Hyper-V, because such containers are basically VMs. Am I mistaken?


If you need fast storage resources you may notice a performance hit. Performance for CPU and memory resources seems to be mostly the same as Docker on Linux.

https://caleblloyd.com/hardware/docker-performance-bare-meta...


I switched to fast Ring on my two PCs yesterday. I don't get the update? Any ideas why?


It takes a bit for you to change rings. Give it a day or two.


Can anyone who's tried this out comment on the terminal?

Does it have all the same issues as gitbash/cygwin/mingw/winpty (garbling, bad resizing, etc) or is there finally a decent local terminal on Windows?


Windows 10 console (not just for Bash, but cmd/powershell too) added a bunch of options like normal line wrap, highlighting and copy/paste that do improve a lot. The annoying thing is that these options aren't the default (for compatibility I guess?) and your settings in one don't seem to carry over to other console windows, I feel like I've set them like six times now.


Will this be enabled by default ?

The ability to do 'curl some-site.com | bash' or ssh <hostname> 'curl some-site.com | bash' without having to worry about platform compatibility would be amazing.


I doubt it, and part of me hopes not. This seems like a new possible vector for malware.

Microsoft went to great lengths to disallow these kind of one-liners in PowerShell (you need to explicitly add -executionpolicy bypass to even run a script), so apparently they are concerned about users executing random scripts off the net.

OTOH, yes, being able to assume bash on a user's installation would be amazing of course...


> Microsoft went to great lengths to disallow these kind of one-liners in PowerShell (you need to explicitly add -executionpolicy bypass to even run a script), so apparently they are concerned about users executing random scripts off the net.

Yes, and I fucking hate it; way to cut off useful functionality at the knees.


A lot of my PowerShell scripts are accompanied by shortcuts or batch files that do:

    powershell.exe -ExecutionPolicy RemoteSigned -File whatever.ps1


Why would you ever want to do 'curl please-pwn-my-computer-so-hard | bash'?


Same reason you want to `brew please-pwn-my-computer-so-hard` or `apt-get please-pwn-my-computer-so-hard` or `please-pwn-my-computer-so-hard.exe`.

There's little practical distinction between piping a shell script from a random site or downloading a binary from it.


> or `apt-get please-pwn-my-computer-so-hard`

Installing a package manually vetted by distribution maintainers, signed and verified with GPG, is the same as blindly running a random script off the internet?

I don't think you appreciate how much effort Linux distributions invested into creating safe ways of distributing software.


The difference being that distro packages are distributed in a safe manner (signing and verification), whereas a website could be hijacked and the script replaced, or you could get MITM'd if you access it over HTTP.
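
If you do have to fetch a script from a site, a middle ground is to pin an expected digest and refuse to run the file unless it matches. A sketch (the URL is a placeholder, and the hash shown is the well-known SHA-256 of an empty file, just to illustrate the shape):

```shell
# Download to disk instead of piping straight into bash:
#   curl -fsSLo install.sh https://example.com/install.sh
# Stand-in for the download here: an empty file.
: > install.sh
# Only proceed if the content matches the digest you pinned in advance:
echo "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  install.sh" \
  | sha256sum -c - && echo "digest ok, safe to run: bash install.sh"
```

This doesn't give you a maintainer's vetting, but it does defeat a hijacked download or a MITM on the transfer.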


http://brew.sh/

`/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/in...

https://rvm.io/rvm/install

`\curl -sSL https://get.rvm.io | bash`

https://www.npmjs.com/package/nodejs-oneline/tutorial

`curl -sL https://deb.nodesource.com/setup | sudo -E bash -`

It is a trend. I can find more examples, the above is just places I remember seeing this behaviour.


Of course these fucking devops #yolo #hashtag hipsters have no clue of how to safely distribute software.



How much work is it to try this, starting from a normal Windows 10 install?


A fair bit. You need to become a Windows Insider, if you aren't one already, opt in to insider builds (which can take a couple of days), and then install the insider preview.

Not worth it if you're just wanting to try it out, but if you want to test it and feed back, then knock yourself out!


Yes, easier to just wait some time until the "Windows 10 Anniversary Update" goes public for everyone.

Will surely be installed automatically like the November Update.


I have it on good authority that W10 Anniversary Update will go live for the general public before the end of July if all goes to plan.

I think you will still then have to enable dev mode and install it as a feature as per the OP link.


The Anniversary Update will probably be automatic, yes (don't see this as a bad thing), but I am a bit skeptical that this feature will be included in the general update. Probably opt-in/separate download. It's a developer feature, after all.


Can I run cron and schedule jobs?

I have Win 7, so I was thinking of running a VM with Win 10 to try it out (once it is officially released).

It sounds like it runs X/Windows which is fantastic!
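
For reference, a crontab entry looks like the line below; whether WSL actually runs the cron daemon is an open question, since nothing starts background services by default (the script path is hypothetical):

```shell
# Field order: minute hour day-of-month month day-of-week command.
# This entry runs a script at minute 0 of every hour:
line='0 * * * * /home/me/backup.sh'
echo "$line"
# Installing it would be: echo "$line" | crontab -
```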


Sometimes I feel like Microsoft is spying on me. I've used Windows since I've used computers; a few months ago I 'upgraded' to 10. Sure, it was slower and unstable, but I figured I'd give Microsoft some time to fix it and struggled on. Then one day I come home and my lock screen is an ad. Right then I downloaded Debian, made a USB drive, and said goodbye to Microsoft on my personal machines forever. I'll never look back. A week or two later they announce this. Sorry Microsoft: after you get a taste of the power, customization and flexibility of Linux, you never go back.


Can anyone say anything about the stability of build #14316?


Coming from a previous build, which was pretty good for my daily job, I have high expectations for this one.


Why is it called Bash on Ubuntu on Windows? What benefits does this bring in comparison with running an Ubuntu VM, for example? Sounds interesting, though.


The developers are interested in serious discussion of a better name.

* https://news.ycombinator.com/item?id=11391931


Just went to Amazon looking for a Windows notebook


Everything I try to apt-get is giving me the error "Could not resolve 'archive.ubuntu.com'"


Can you do that on Windows 7? I could use this on work but our company hasn't updated to Windows 10 yet.


No, it's windows 10 only, and still in beta at that.


I'm not seeing this windows feature in the options.. is it not available on Windows 10 Home?


Right now it's only on the insider builds, not for everyone. Unless you are getting beta/alpha builds, you're not going to see it for a while.


Doesn't work for me :(

The initial "bash" command freezes and doesn't download anything.


No Windows 7 support, and I guess I will stick with Windows 7 in the near future.


"bash on Ubuntu on Windows" am I the only one who finds the name weird?


As I mentioned elsewhere on this page, the developers are interested in serious discussion of a better name.

* https://news.ycombinator.com/item?id=11391931


Microsoft has always been great at naming things!

https://www.youtube.com/watch?v=EUXnJraKM3k


It is super weird. I think Ubuntu shell for Windows is much, much better.


"Bash emulation in Ubuntu Layer on Windows" would probably be the accurate version :-)


Does this make installing Python binary packages (such as numpy) less hellish?


I'm not sure but try Anaconda. 'conda install numpy' works for me, while pip throws an exception.
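Whichever installer wins, a quick smoke test confirms the compiled package actually loaded (assumes numpy is already installed; the dtype name varies by platform):

```python
# Verify that numpy's compiled core imports and computes correctly
import numpy as np

a = np.arange(6).reshape(2, 3)
print(a.sum())       # 15
print(a.dtype.name)  # e.g. 'int64' on most 64-bit builds, 'int32' on Windows
```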


And Windows is now more vulnerable. #bashViruses.

From: Alien

To: SomeWindowsBashUser

Attachment: naked.jpg

naked.jpg

------------------------

#!/bin/bash

rm -rf /

------------------------


Do symlinks, mmap and epoll work?
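One way to find out on a given build is to probe those interfaces directly from Python inside the environment (a rough sketch; select.epoll is a Linux-only API, and the symlink check needs a writable temp directory):

```python
import mmap
import os
import select
import tempfile

# mmap: map a temporary file and read its contents back
with tempfile.TemporaryFile() as f:
    f.write(b"hello wsl")
    f.flush()
    m = mmap.mmap(f.fileno(), 0)
    assert m[:5] == b"hello"
    m.close()

# symlinks: create and detect a link in a temp directory
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target")
    link = os.path.join(d, "link")
    open(target, "w").close()
    os.symlink(target, link)
    assert os.path.islink(link)

# epoll: register a pipe's read end and wait for readability
ep = select.epoll()
r, w = os.pipe()
ep.register(r, select.EPOLLIN)
os.write(w, b"x")
assert len(ep.poll(timeout=1)) == 1
ep.close()
os.close(r)
os.close(w)

print("mmap, symlinks, and epoll all behaved")
```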


could this be the thin edge of a very large wedge?


Look at me, I'm the Linux now.

-MSFT


[dead]



I guess you prefer experiment, excite, extinguish, by Google.


Which Microsoft loves to do just as much – XNA, Silverlight, TxF, …


Maybe; personally I look at some of those as transient technologies. MS usually provides a migration path, as painful as that might be, to better solutions.


In all these cases the official migration path was "good luck ¯\_(ツ)_/¯"


Now we just need someone to make a nice laptop that can compete with the MBP...


I love the chromebook pixel with a debian chroot (aka crouton). I use the 2015 pixel with a 2560x1700 screen, i7, 16GB ram, and I get 12 hours battery life.

The chromebook pixel has been my only laptop for three years, and it's been rock solid. I never spend time troubleshooting. Setup upon upgrading from the 2013 to the 2015 pixel took about two minutes.

At this point, I'd prefer to use a chromebook rather than loading ubuntu on a laptop, because the stability of the pixel has saved me so much time.


Asus Zenbook UX305 series - they are well priced and run both Linux and Windows fantastically.


But battery life is only 6 hours[0]. For the new MacBook it is 10 hours[1], and the same for the 13" MacBook Air[2].

I'm aware not all reviews are equal, so pointing out errors or adding better resources would be appreciated.

[0]: http://www.techradar.com/reviews/pc-mac/laptops-portable-pcs...

[1]: http://www.trustedreviews.com/12-inch-macbook-2015-review-ba...

[2]: http://www.notebookcheck.net/Apple-MacBook-Air-13-2015-Noteb...


I get 9-10 hours on Linux with the UX305FA model (pre-Skylake fanless CoreM, i.e. MBA competitor). Not sure about the UX305LA model (i5/i7, i.e. MBP competitor).


They already did: surface book.


I checked out the Surface Book briefly, and boy was it a massive disappointment. The screen is much heavier than the keyboard, so it's unbalanced; detaching and reattaching the screen is extremely awkward, and there were touch issues with the stylus that the salesperson could not resolve.

I'd rather hold out for the next iteration.


Hey, you can install Windows on an MBP :)



