I ran Linux natively as my sole workstation OS for nearly 10 years, and spent a lot of that time tinkering with Wine, including developing and submitting some patches. Eventually I had to give up, because advanced things like Photoshop were too spotty in Wine and too slow in VMs.
My solution was ultimately to set up an Arch-based KVM hypervisor with a Windows 10 VM running as the main "workstation", with USB + GPU PCI passthrough and paravirt. The hypervisor also runs Linux VMs, from which I do development work via VNC and/or SSH.
This is the most convenient workflow situation for me, and allows the best of both worlds. It essentially makes Windows act like a desktop environment for a Linux box while maintaining practically-native overall performance for all workloads, including gaming and photo/video editing. It also grants the admin convenience of virtualized environments, since I can use zvols to snapshot everything at once, place clean resource limitations on each environment, etc.
It would only not work for Linux-based graphics development, but even then, you can get a second GPU and pass it through to another VM, running on a separate display.
Before I got the hypervisor set up, I ran Windows on the hardware with Linux VMs hosted in VirtualBox. The biggest issue with this (aside from the general shame and guilt of using Windows on the hardware) was that Windows would decide it wanted to turn off for MS-enforced updates and bring everything down. Now, Windows is separate and it can crash, reboot, or hurt itself all it wants, and rarely causes any real loss.
Basically you need relatively new hardware. IOMMU is a must (VT-d, on Intel).
Here is a guide from some other user: https://github.com/saveriomiroddi/vga-passthrough
It was really helpful, but I had to give up since my mainboard was too stupid and my graphics card didn't work as I wanted. I could install the OS, but it bootlooped after installing any driver.
I used an ASUS B350-M mainboard with an AMD Ryzen 5 1500X + a Radeon R9 280 (Tahiti-based, basically the same as an HD 7970),
and it didn't work, since my mainboard only had one PCI Express x16 slot, which meant that the "better" card needed to be used for the main OS. You can use the card in the first slot for the OS, but it might not always work (it's highly hardware-dependent). Having a good mainboard is way more important for this to work.
Well, you just need an EFI firmware, and I guess OVMF is the only one the guy knew of (I don't know of any other either). If you boot QEMU without any UEFI firmware, it emulates a real-mode BIOS device, which probably makes it impossible to access the device correctly (not verified or tested, since my rig never worked).
But you can install OVMF on any recent Linux distribution, and you probably don't need to patch it (at least on Ubuntu 17.10, Fedora 27+, Arch Linux...).
> OVMF is an open-source UEFI firmware for QEMU virtual machines. While it's possible to use SeaBIOS to get similar results to an actual PCI passthrough, the setup process is different and it is generally preferable to use the EFI method if your hardware supports it.
There are some guides, but it's very hardware-dependent and touch and go. It took me a couple of weeks to get all of the bugs worked out and things running reasonably smoothly. I would not recommend it for the faint of heart, or those without significant sysadmin experience.
On consumer hardware, it can be hard to find out if you even have IOMMU support, required for passthrough. It's not necessarily new but a lot of hardware doesn't support it. Unfortunately for me, my i7-3770k did not have it (but the i7-3770 non-K did). I did the hypervisor build on a new enterprise-class workstation with a Supermicro motherboard.
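For anyone doing the same check: a common way to see whether the IOMMU is actually active, and how devices are grouped, is to walk sysfs. This is a minimal sketch (the sysfs paths are standard; it assumes `intel_iommu=on` or `amd_iommu=on` is already on the kernel command line, and prints nothing if no groups exist):

```shell
#!/bin/bash
# List each IOMMU group and the PCI devices in it.
# No output usually means the IOMMU is disabled in firmware, or the
# kernel command line is missing intel_iommu=on / amd_iommu=on.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for device in "$group"/devices/*; do
        # lspci -nns shows the device name plus its vendor:device IDs
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```

Devices can only be passed through at group granularity, so what you want to see is the GPU (and its HDMI audio function) in a group of its own.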
I use libvirt for the config. I started to build a custom kernel to enable some extra features, since I had to compile dev branches anyway to troubleshoot periodic hard locks on kernels 4.12 and 4.13, but the setup should work with a stock kernel.
For me, the biggest hangups on the checklist were:
a) Ensure 100% UEFI everywhere. It's possible to do this with BIOS, but as far as I understand it's not well tested anymore. It can also be hard because of the way boards are sort of straddling a middle ground between UEFI and BIOS; if you don't set everything to explicit UEFI in the setup, it may init the system with either, or may init the BIOS first for hardware compat, which will make things weird.
If your video card is slightly older and from the time when UEFI was just getting supported by PC mobos and does not have UEFI boot, you may be able to find a UEFI-compatible video BIOS online. I had to do this for my GTX 670 (I use a 1070 most of the time though, and it did not need this).
b) Kernel VGA options, particularly video=efifb:off. In theory you don't need this but it depends on your board, hardware, etc. Ensures that the video device is available for vfio-pci to grab and block out other drivers that may try to grab it later. The downside is no video out after the bootloader so you can't watch the boot process. I use a USB serial port to watch the boot now, but for a long time I just waited until SSH came up, and if it didn't come up, used the systemd emergency console to try to poke it blind. This made things far harder than necessary. Get serial output ASAP, or use a board with a built-in IPMI. Didn't realize it was possible to order the workstation I got without it, or I would've made triple-sure I had it. Would've made things way less aggravating.
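For reference, the pieces involved usually look something like the fragment below. This is only a sketch: the PCI IDs are example values (a GTX 1070 and its HDMI audio function), and you would substitute your own card's IDs from `lspci -nn`. It also assumes vfio-pci is available in the initramfs so it can claim the card before the display driver does:

```
# /etc/default/grub -- example kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt video=efifb:off vfio-pci.ids=10de:1b81,10de:10f0"

# /etc/modprobe.d/vfio.conf -- bind vfio-pci to the GPU before other drivers
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```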
Along the same lines, having multiple GPUs helps a lot with this, especially if you can configure your system and/or move things around in their slots so that the one you don't want to passthrough gets initialized first. Then you don't have to worry as much about competing with the system EFI, bootloader, or kernel for raw control over the device. Not strictly necessary, but useful.
c) nvidia Code 43 on passthrough. This is nvidia trying to extort some sort of ridiculous data center license out of you or something. There are various potential fixes that are somewhat easy to find floating around, particularly QEMU flags, but this is another one that you just have to poke around until you find a combination of options that work. For recent drivers, one of these things is setting the initial substring of your mobo name to a recognized consumer vendor in QEMU/libvirt.
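In libvirt terms, the commonly-circulated Code 43 workarounds look roughly like this fragment of the domain XML (the `vendor_id` string is arbitrary, and the `sysinfo` board manufacturer is an example of the consumer-vendor trick mentioned above):

```
<features>
  <hyperv>
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>

<os>
  <smbios mode='sysinfo'/>
</os>
<sysinfo type='smbios'>
  <baseBoard>
    <entry name='manufacturer'>ASUS</entry>
  </baseBoard>
</sysinfo>
```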
d) For good sound in the Windows VM, you need to hack the driver INI to use MSI for interrupts, which requires running in Windows "Test Mode", aka unsigned driver mode. This breaks some anti-cheat software, and I returned PUBG because it didn't allow the game to start while Windows is in Test Mode. You also have to set/tweak specific custom args on QEMU and the host's PulseAudio to get the timing right. Without this, the audio drifts very noticeably out of sync. Alternately, pass through your sound card or use HDMI audio.
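For what it's worth, the variant of the MSI tweak I've seen documented most often is a registry edit rather than a driver-file hack (the device instance path below is purely an example; you'd look up your own audio device's actual path in Device Manager):

```
Windows Registry Editor Version 5.00

; Enable message-signaled interrupts for one device. The key path must
; match your own device's instance path from Device Manager.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_8086&DEV_293E&SUBSYS_11001AF4&REV_03\3&13c0b0c5&0&D8\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```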
Obviously there were a bunch of other little hiccups but this is what stands out to me off the top of my head. Best resources are Arch Linux wiki on passthrough, Proxmox forums, and /r/vfio.
All this said, the biggest source of pain was not related to IOMMU or virt at all, but rather LVM2-based thin provisioning and device mirroring before I switched as much as I could to ZFS (still working on the last few pieces). ZFS is somewhat stricter but it works reliably. LVM would frequently make boot hang, fail to reinitialize the volumes correctly, etc.
Happy to help anyone going down the same road. It's really a great setup once it's running and I'm sure there'd be more envy for it if I had the energy to do a big write up that showcased it to everyone. :P
> It would only not work for Linux-based graphics development, but even then, you can get a second GPU
My impression is that dGPU "hot" swapping between non-running guests has gotten easier, but that handing the GPU back to the host after a guest has used it is still a hardware/drivers/kernel "maybe it just works, or maybe you can't get there from here".
Since I read about this a few years ago I have really wanted to try it, but I don't want to buy an extra GPU for it. I hope AMD [0] brings their SR-IOV implementation, called MxGPU, down to their mainstream GPUs; it allows splitting a single GPU between host and guests. Apparently this would also be more secure than passthrough?
In order to not affect their pro GPU sales they could maybe limit the number of virtual GPUs from 16 to 2, which would be enough for the host and a single guest.
[0] or Nvidia, I don't care, but since Nvidia is the market leader they have less incentive than AMD.
On my list is a GPU passthrough setup, which is described in the following blog post [1] (with screenshots).
I have not set it up yet, but I will try it out next time I build my home desktop from the ground up.
Have you tried figma.com? It's web-based and in many ways even better than the Sketch app. I personally stopped using Photoshop for graphic design a long time ago and never looked back.
Figma is great, they have very interesting ways of solving problems, through rethinking from first principles of what makes a feature interesting or useful for the end user.
This looks fantastic, they are properly innovating and not just trying to be a web version of sketch. I just looked into how paths work (pen tool), and it is finally exactly how I originally envisioned they would work when I was first introduced to them decades ago. I never fully understood why the pen tool had a surprisingly high learning curve and general awkwardness. I assumed it was to meet some specific high-end needs of pro designers (I am an engineer first). Figma seems to be showing otherwise.
If it actually works without serious problems (you know, sometimes you can install something, make it run, and take some screenshots without it actually being usable in real life), then this is a bomb! I am pretty sure there are people who are willing to switch to Linux but are stuck on Windows because of the need to use the most recent Photoshop, Illustrator, Office, and/or Visual Studio.
Wine's AppDB is a great resource for these kinds of questions, and a really easy way to contribute back to the community through posting your own experiences. Here's a link to the Illustrator CS6 page:
https://appdb.winehq.org/objectManager.php?sClass=version&iI...
According to that, Illustrator CS6 is working in Wine 2, but not extensively tested yet. If you have Illustrator and are willing to try it on Wine 3, you could do the community a huge favor and update this page.
In addition, info seems to be culled readily. I went back to find info I'd posted so I could set up a game again, and it had been deleted (it turns out they had emailed me to tell me that if I didn't actively maintain it, it would be removed; I no longer add data).
Wine has always seemed so brittle, needing cryptic incantations to put the Windows environment in to just the right state.
I've been a moderator on a couple of AppDB entries over the years. The quality and reliability of the entries depends entirely on the person/people moderating the app's page, as results could be rejected or approved at the moderator's discretion. Some would reject almost anything, always demanding more details. Some would approve almost anything, even if it didn't contain any useful information at all. Really just hit and miss.
"Gold" and "Platinum" obviously mean different things to different people. If you go strictly, it's hard to call anything "Platinum" because almost everything has some differences in behavior v. native, even if they're only cosmetic. If a tester hasn't run into any, it's more likely that they've just done a superficial check of the program, said "Yep, it launched, looks like everything is here", and submitted "Platinum".
AppDB's moderation perspective and codebase was badly outdated when I was involved (several years ago now). I haven't checked on it recently so maybe they've redone things, but this is why you've seen inconsistent entries in the past.
As for the Windows environment, yes, different programs need different registry settings and other environment hacks. The main company that funds Wine development makes their money by selling software that manages these environments. There is also a free script called winetricks that will help install programs with the correct environment settings.
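As a sketch of what that looks like in practice (the winetricks "verbs" used here are real ones, but which verbs a given program actually needs is trial and error):

```
# Create a fresh 32-bit prefix for one app and set its Windows version
WINEPREFIX=~/.wine-myapp WINEARCH=win32 winetricks win7

# Install common dependencies into that prefix
WINEPREFIX=~/.wine-myapp winetricks corefonts vcrun2015

# Then run the installer inside the same prefix
WINEPREFIX=~/.wine-myapp wine setup.exe
```

Keeping one prefix per app is the usual way to stop one program's environment hacks from breaking another's.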
Despite semi-complex environment management requirements and the outdated AppDB code, Wine remains one of the most impressive pieces of software in production today. It's an extremely ambitious project and it's handled very professionally. It's a great all-around reference point for code quality, standards, and community. Alexandre Julliard is a deeply underappreciated BDFL and IMO his name belongs up there with other open-source luminaries.
In all my years of computers, few projects have impressed me as much as Wine. It still feels like it shouldn't be possible, and yet, it works wonderfully.
Bit of a difference porting well-documented APIs with multiple open-source implementations compared to a complete clean-room reverse engineering of a closed system, quirks and all.
> You do realize that if a Windows application is calling an API, it's well documented?
To the original developers of the Windows application, perhaps. Not to the rest of the world. Not unless that Windows application is open source.
Windows has years upon years, multiple layers, and multiple versions of APIs, with quite some buggy behaviour preserved for the sake of "backwards compatibility". Linux (the kernel) has a much smaller userspace API.
Wine has to reverse-engineer not only the Windows kernel, but also all the editions and versions and layers of the Windows userspace. WSL has to implement just the Linux userspace API.
> Also the WSL team is forbidden from looking at the source.
Are they also forbidden from reading the excellent man pages (of say, glibc) or the kernel documentation? That is as direct a source of information as they can get without having to parse source code. And the other BSD-licensed implementations of the Linux userland APIs are available in source code form. Quite different from reverse-engineering dlls and runtime behaviour.
It works on Linux, but Wine on macOS can't support DX11 games, because macOS has a crippled, outdated OpenGL. You should use Linux, where Wine can use OpenGL fully; Wine needs many OpenGL features up through 4.5 to support D3D11, and macOS is limited to 4.1 and lower.
macOS doesn't support Vulkan either, so DX12- and Vulkan-based Windows games won't work there in Wine either (while they can work in Wine on Linux).
TL;DR: Linux is a much better option for gaming in Wine than macOS.
I finally fully switched over to Linux from Windows due to specific games beginning to work in Linux (was some years ago now). And I know several others who say they prefer Linux but don't want to switch because they play games that might not work on Linux.
Why would it be improbable that someone switches from Mac to Linux due to game support, when Mac already has far worse support than Windows and in some respects is more like Linux? Shouldn't the leap from Mac be smaller? OK, I'll grant that I am not taking the Mac user mindset into account.
Actually quite a lot of macOS refugees switch to Linux precisely because of gaming. Mac hardware is simply underpowered, plus there are issues like the above: bad support in Wine because of crippled OpenGL and no Vulkan.
There are no Macintoshes with high end GPUs as far as I know. That already makes it underpowered in general.
And reasons for switching to Linux are from discussions with Wine users. If you are one, you should have encountered this topic already. But I assume you aren't using Wine for gaming, thus your question.
I love Wine, but I have some rather obscure apps that I'd like to run that just refuse to run in any version of it (mainline, Crossover, Winebottler, manually extracted copies of Cider from macOS-ported games...) As-is, I have to run these in a Windows VM to get them to work at all.
It's a lot of hassle, especially because the least-working Windows programs are also some of the tiniest little utilities where they display maybe one GUI screen, but also take command-line parameters. So I usually have to copy files for them to interact with into the VM, open cmd.exe to run the utility on the files, interact with the utility GUI, then copy the files back out. Wine would make this workflow a lot better!
What I'm wondering is: if I don't care about being a Wine "purist", and I have a legitimate copy of Windows laying around, is there any way to dissect that copy of Windows for its files (drivers, helper executables, etc.) and wrap them around a Windows EXE, such that Wine will be using as few Wine implementations of DLLs as possible, rather than just the minimum required to get the individual EXE vaguely working? Is there a way to start off a Wine install with "maximum compatibility" like this?
I really do wish there was some equivalent of Windows' WSL: a kernel-side ABI shim for Linux/macOS that can run a complete copy of Windows (minus the kernel) inside it, including services like GDI—which you could then use an RDP client to connect to—but where the entrypoint is running an individual EXE, and those services are only started when clients attempt to connect to/use them. If I can make Wine like that, that'd be a dream.
Alternatively, Linux+macOS support for running Windows Docker applications, plus support for containerizing graphical Windows applications, would largely obviate my need for Wine.
Funny enough, when I was younger and more naive, I attempted to write a Linux kernel implementation of Win32, starting with a PE binfmt and linker. Never got much of interest working because I had trouble dealing with syscalls.
But I realized over time that it wasn't actually a good idea. Linux and Windows have one hell of an impedance mismatch. Windows distributes the user-space and kernel-space dramatically differently from Linux. The Linux VFS layer is greatly different than the Windows filesystem layer, and related semantics. Windows likes UTF-16 APIs, Linux likes UTF-8. They do threading differently, Winsock is extremely broken, named pipes don't work like netlink sockets, Windows is a little bit country, Linux is a little bit Rock'n'roll...
I'm sure someone with more experience can list off more. Even the general ecosystems are entirely different. A lot of things on Windows are kind of 'defacto.' Like, wintab32.
At the end of the day, the hacks that Wine does to make things work are astoundingly small compared to the massive undertaking they have reimplementing Windows in a fashion that is useful. You could go in with the logic of trying to be 'as Windows as possible' and use as much of the Windows libraries as you can, but how are you going to deal with things at that level, when Windows puts GDI and RPC in the kernel and Linux has it as userspace daemons?
> What I'm wondering is: if I don't care about being a Wine "purist", and I have a legitimate copy of Windows laying around, is there any way to dissect that copy of Windows for its files (drivers, helper executables, etc.) and wrap them around a Windows EXE, such that Wine will be using as few Wine implementations of DLLs as possible, rather than just the minimum required to get the individual EXE vaguely working? Is there a way to start off a Wine install with "maximum compatibility" like this?
The majority of problematic DLLs are officially redistributable by MS. The vast majority of the remainder depend on undocumented kernel APIs, which are... also unimplemented in Wine, and usually the reason the builtin Wine DLL doesn't work properly. You'd have an easier time importing your legitimate Windows drivers to Linux, through ndiswrapper!
> So I usually have to copy files for them to interact with into the VM, open cmd.exe to run the utility on the files, interact with the utility GUI, then copy the files back out.
Most VMs provide the ability to share folders between the host OS and guest OS.
You can certainly redirect API calls into native Windows DLLs. There are also debugging utilities to see where your applications are failing. How low-level are the utilities you are using?
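Concretely, Wine's per-DLL override mechanism and debug channels are the usual starting points. A sketch (the DLL names and `utility.exe` are just placeholders):

```
# Prefer a native Windows DLL over Wine's builtin for this run:
# n = native first, b = builtin as fallback
WINEDLLOVERRIDES="msvcp140=n,b;comdlg32=n" wine utility.exe

# Trace API calls and exceptions to see where the app falls over
WINEDEBUG=+relay,+seh wine utility.exe 2> wine.log
```

The same overrides can also be set persistently in winecfg's Libraries tab.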
After migrating from macOS to Linux last spring, there is one application I still miss. It is available for Windows and macOS only.
I tried to get it to work with Wine, but the installer complained about missing .NET 4.5.1+.
So I installed it, but the installer still complained.
Then I read that Wine cannot emulate Windows 7 + .NET 4.5.1, so I switched the emulation mode to Windows Server 2003 R2. Now the installer complained about only supporting Windows 7 or newer.
So, does anybody know if Wine 3.0 supports Windows 7 + .Net 4.5.1?
(I know, I could do the research myself. Don't bother, unless you know it off the top of your head.)
EDIT: Almost forgot - congratulations to all the developers!!!
If it's a must have for you, you could try buying a copy of Crossover, which is a commercial version of Wine that employs several Wine developers.
Anyone who owns Crossover is entitled to support, and you may be able to then submit your specific program over to the devs to take a look and see if they can figure out why it's not working.
Note: I say "might" and "may" because buying a Crossover license doesn't automatically entitle you to demand support for any obscure program you can think of. The developers will take a look at your logs and offer suggestions with no guarantee of a fix. But it is a way of supporting Wine development.
I was just marveling at how well Wine+Linux works with my old music and audio hardware. A lot of my old gear hasn't had functional Windows drivers since XP...but, Linux picks it right up. And, a few days ago I thought, "What the heck, let's see if the sysex editor/librarian for this old beast will work in Wine?" And, it did! It saw that the driver wasn't the official driver from the manufacturer (since it was the ALSA driver) when starting and it warned about it, but it still worked, and still detected the device and loaded up all the patches and such.
Historically for these abandoned devices, I'd just use MIDI for communicating with it; since MIDI hasn't changed in decades, it all continues to work just fine (in Windows and Linux). But, it's certainly convenient to just plugin the synth itself via USB and have it all Just Work.
But, yeah, to speak to your point: There are mountains of old software (and hardware!) that won't work on Windows anymore, but will run fine on Linux with Wine. That's pretty amazing to me. I couldn't believe how smooth it all was; I remember spelunking docs for hours to figure out how to make things work. Now, I just run the installer (or just run "wine app.exe") and there it is. I'm sure there are exceptions, but damned if I'm not still impressed.
I rarely consider any program important enough to reboot to use it, since I'm most comfortable and productive in Linux, and Wine becoming so good over the past few years makes it even less of an issue. I still keep a Windows partition on my laptop, Just In Case, but I haven't needed it in months.
I'm getting a new laptop soon to start tinkering a bit more with my keyboard and a synth, and I'm considering a Macbook for the first time in my life based on a recent HN article regarding latency.
Have you found that a Linux machine has noticeable latency when playing/recording music and applying filters?
Linux is not ideal for recording audio, but it has improved remarkably in recent years. It used to require a bunch of kernel patching to get low latency recording (though once you jumped through those hoops, it was lower latency than most Windows environments), but now most of those patches have been mainlined, and you just have to add your user to the realtime group on your system to allow JACK to use realtime scheduling.
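On most current distros that amounts to a PAM limits entry plus group membership, roughly like this (the file name, and whether the group is called `realtime` or `audio`, vary by distro):

```
# /etc/security/limits.d/95-realtime.conf
@realtime   -   rtprio    95
@realtime   -   memlock   unlimited

# then add yourself to the group and log in again:
#   usermod -aG realtime $USER
```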
I don't notice the latency, but I also haven't measured it, and I just do it as a hobby, so spending some of my time tinkering with quirky stuff is fine (though it's not really that quirky anymore...tons of stuff Just Works).
If it were my job, I would probably consider a mac. It's not; my job is software, where Linux is the clear winner. But, I don't think it's nearly as clear as it used to be that Linux isn't suitable for pro audio work. I think a reasonable person might give it a try, whereas five (or ten, or twenty, which is when I first started poking at audio on Linux) years ago it would have been madness to try to do real audio work on Linux.
I wouldn't think just the rt stuff would make a difference for normal desktop stuff since processes have to opt in to realtime scheduling (and the processes that don't can then be pushed aside to meet deadlines for the RT processes). But, maybe there are other scheduler changes present in the patch set.
I just use whatever RT features are shipping with current Fedora, and it works well enough for me. I don't miss patching a kernel every couple of weeks as was required back in the bad old days of doing audio on Linux. (Though CCRMA made it easier with their package repository, so it hasn't really required a completely DIY process for many years.)
>I wouldn't think just the rt stuff would make a difference for normal desktop stuff since processes have to opt in to realtime scheduling (and the processes that don't can then be pushed aside to meet deadlines for the RT processes).
The same sort of opt-in as in mainline. SCHED_FIFO or SCHED_RR. Yes, it makes a difference even without opting in.
I get spikes beyond a ms within minutes on mainline, on all my hardware.
In practice, without even setting SCHED_FIFO or SCHED_RR, I can play SFC (SNES) games in higan on linux-rt. This is not possible on mainline, not even with SCHED_FIFO: even with higher audio buffer sizes, the audio cuts out constantly. Same with audio work with jackd and low-latency buffer sizes.
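For anyone wanting to experiment with this, `chrt` from util-linux is the usual way to poke at scheduling policies; a small sketch (`some_program` is a placeholder):

```shell
# Show which scheduling policies this kernel supports, with priority ranges
chrt -m

# Inspect the current shell's policy (SCHED_OTHER unless changed)
chrt -p $$

# Launching under SCHED_FIFO at priority 80 needs the rtprio rlimit or root:
#   chrt --fifo 80 some_program
```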
>I just use whatever RT features are shipping with current Fedora, and it works well enough for me.
No such luck for me, and I've even tried ck's patchset and such. It barely provides any improvement. Yet linux-rt is radically better.
And, while unrelated, even the usual disk i/o stalls that plague Linux (on any hardware, using any common distro standard kernel) haven't happened at all to me on linux-rt. It's as if using an entirely different OS. It feels responsive.
>I don't miss patching a kernel every couple of weeks as was required back in the bad old days of doing audio on Linux. (Though CCRMA made it easier with their package repository, so it hasn't really required a completely DIY process for many years.)
Me neither, as AUR makes linux-rt very convenient.
If you get high latency on a Linux machine when doing audio stuff, you probably have hardware problems or misconfigured software. Years ago I moved everything I had under XP (Reaper + plugins) to a Wine directory, reinstalling the plugins that didn't like being moved, and never looked back. I'm using the KXStudio repositories' software (RT kernel, Wine, etc). http://kxstudio.linuxaudio.org/
Latency is very low, even on old PCs such as Core Duo ones, and compatibility is very high.
The only problem I encountered with Reaper under Wine is that it needs a lot of time to buffer a very big audio track when it's being added to a project. Say I record my band on our portable recorder, then drag and drop the big sample (2-3 hours of 44.1 kHz x 16-bit stereo uncompressed .wav) onto a Reaper track: it starts eating all the PC's memory to buffer it, hanging the machine for a good minute. Once the track is added, the memory gets released and it works perfectly, so it's not technically a bug, and I have no idea if it depends on Wine or Reaper.
A few months ago I also started to use the native Linux Reaper port, which is fantastic and has lately gotten a lot more stable, but all my attempts to have Windows plugins working in parallel with it through Wine failed, although they say it is possible. There are some interesting native plugins out there, but they're not remotely comparable to what is available for Windows (that would be the case for every other native application too), so using Wine is still the best option if you are not just strictly recording and need external plugins. I also explored the possibility of running musical software in a Windows VM, with discouraging results so far: the computing power is there, but latency is a nightmare. As of today, to me Wine still offers the best options if one needs both low latency and compatibility.
Very unlikely. Linux distributions are based on Linux, and the whole point of the Windows Subsystem for Linux, after all, is not having Linux there.
I had issues playing TES Oblivion in Windows 10: I got black screens, and I tried all the workarounds I found on Google. In the end I got it working, with mods, in Wine.
Just trying to help. I'm not an English major or anything myself, but here is one issue I noticed.
Don't overuse commas. After you get a complete idea out there, just end and start a new sentence.
For example: "Please let me know how I can improve my phrasing and what mistakes I made. I am not a native speaker, so I am aware I can build my phrases in a weird way."
I'm curious how Wine accomplishes that, since my understanding was that the reason it can't be done on 64-bit Windows is that there is no way to switch the processor back to 16-bit mode from long mode. Is this incorrect, or does Wine just go ahead and emulate 16-bit x86?
IIRC 16bit code on a 32bit version of windows runs in 32bit mode with some thunking for selectors. The issue is more that HANDLEs can have values up to half the sizeof(HANDLE) and supporting 16bit would have significantly harmed the value of 64bit code or required special consideration of where a HANDLE was getting allocated. Then you have to deal with a 64bit process creating a HANDLE and giving it to a 16bit process. Very quickly it turns into a corner case nightmare.
This is the correct answer. There's nothing stopping you from setting up a 16 bit protected mode descriptor in your GDT and using it. The issue was in thunking HANDLEs.
> Note that 64-bit Windows does not support running 16-bit Windows-based applications. The primary reason is that handles have 32 significant bits on 64-bit Windows. Therefore, handles cannot be truncated and passed to 16-bit applications without loss of data. Attempts to launch 16-bit applications fail with the following error: ERROR_BAD_EXE_FORMAT.
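The quoted point can be illustrated with a bit of arithmetic: a handle with 32 significant bits simply can't round-trip through the 16 bits a Win16 app would store (the handle value here is made up purely for illustration):

```shell
# A hypothetical 64-bit-Windows handle value with more than 16 significant bits
handle=$((0x00010004))
# What a 16-bit application would keep after truncation
as16=$(( handle & 0xFFFF ))
echo "original=$handle truncated=$as16"   # the upper bits are gone
[ "$as16" -ne "$handle" ] && echo "cannot round-trip without data loss"
```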
Do you have a source for that? Running 16-bit code in 32-bit mode doesn't sound like it would work. All the instructions would do 32-bit operations instead of 16-bit operations, so if a program did what it thought was a 16-bit write to a memory location, it would actually end up doing a 32-bit write and would clobber the adjacent 16-bit memory location.
I know nothing about the topic. I'll only point out that https://bugs.winehq.org/show_bug.cgi?id=36664 says ".. the reality in the past the 16 bit code was not a issue because the [Linux] kernel always supported it." and "...3.15 after 16 bit support by [Linux] kernel is optional on 64 bit systems."
>> Afaik 64-bit Windows doesn't support 16-bit binaries, so I just assumed Wine wouldn't do it either on x86-64. Not for any real technical reasons, though.

> Yes, there is still a significant number of users, and we still regularly get bug reports about specific 16-bit apps. It would be really nice if we could continue to support them on x86-64, particularly since Microsoft doesn't ;-)
I would not be surprised to see MS introduce a dockerized '16bit compatibility' mode for Enterprise versions of windows. The only reason 32bit windows is still around is to support 16bit apps for these companies.
The HP Stream 7, a tablet released in late 2014 on the 64-bit Atom Z3735G, shipped with 32-bit Windows. This was presumably to avoid the memory overhead of 64-bit pointers, as it came with 1GB of RAM, so every bit counted.
Inferences from https://blogs.msdn.microsoft.com/oldnewthing/20081020-00/?p=... . I also know that 32-bit programs could load 16-bit modules without issue. In fact, that was one of the things MS was harping on people about: stop using 16-bit modules in your 32-bit applications (which would otherwise have run fine on 64-bit systems). I'm pretty sure it worked because the 32-bit system DLLs all got a "32" appended to their names, so the linker and loader knew which version they were targeting and could route each call appropriately.
Edit:
It's also worth noting that 32-bit and 16-bit instructions are encoded differently because of their register-size operands.
So, strictly speaking, you're absolutely right... a 16-bit read/write COULD do that... but it's a feature, not a bug.
Whether instructions operate on 32-bit or 16-bit quantities does not depend on the mode the processor is in, but on a combination of bits in the code segment descriptor; in essence, these bits can change on every far call. In native long mode the descriptor tables have a different format, but these bits are still there and still have effect.
That is one issue. Another is that the method used to implement 16-bit compatibility in 32-bit Windows involves virtual 8086 mode and running an instance of DOS inside it, even for native 16-bit Windows applications.
All in all, I assume it was dropped mostly because it involved too much work for somewhat questionable benefit.
If I recall correctly, you can still switch the processor to 16-bit mode but virtual 8086 mode is not available from x86-64. So more recent 16-bit Windows applications can be made to work but DOS applications and older 16-bit Windows applications that rely on more legacy DOS-style APIs don't work.
The "older 16-bit Windows applications" (actually a misnomer because there isn't this distinction in practice) actually used DPMI, which does not involve v8086 mode.
The actual machine firmware has been irrelevant for v8086 ever since the time of OS/2 2.0 and DOS-Windows 3.1. Both used virtualized machine firmwares to run in the v8086 virtual machines. OS/2's was dealt with by a virtual device driver called VBIOS.SYS, for example.
Apparently it's a Windows limitation rather than a hardware one, since Linux supports it on x86_64. It got broken at one point, but then the Linux kernel developers re-enabled it, with even Linus Torvalds himself inquiring about the issue.
If you stretch your definition to "problematic but usable", it's certainly true for recent versions of Windows. Moreover, it's likely been true for a few years!
> If you stretch your definition to problematic but usable,
One common example of a use case for Wine is games, so maybe we can measure it as the number of games Wine has in its database over the years.
> it’s certainly true for recent versions of Windows.
Sounds very likely, given that D3D 11 is now supported. D3D was an enabler, or a catalyst if you want to call it that, for the games and GPU industry. Who knows what's in store with UWP.
I know, right??? I've been using Samsung DeX as my primary computer for video editing and web design. I've been craving a desktop version of some Windows programs tho haha
I remember having to follow this guide when I needed to flash some cRIOs using some garbage Win32 flasher program with an Acer ICONIA A500 running a barely-functional build of Arch... I hope they've managed to automate this step in the Android APK build process in Wine 3.0.
I remember DX10 required an upgrade to Windows Vista at the time. Does the DX10/11 support here mean that we have a non-proprietary way of running those games? It'd be great not to have as much vendor lock-in for this kind of stuff, especially as games get older and less-supported.
Wine performs dynamic translation of D3D calls into OpenGL (or into Vulkan for D3D12), so it doesn't need any proprietary code to do that. However, Wine developers might need to reverse engineer the logic of MS libraries to correctly implement the same behavior through open APIs.
It surely helps to remove the lock-in that the DirectX API causes. Run Windows-only games on Linux? For MS lock-in proponents, it must be a nightmare.
As someone said, it is a volunteer project, but some companies make money from it, like TransGaming (now part of Nvidia), which ported games to Mac OS X. So games like FFXIV for Mac are not native but actually a professional Wine-based wrapper.
And CodeWeavers, who were the first to reliably support Microsoft Office in the early 2000s. [1] They used to contribute a lot back to Wine (they probably still do). I think TransGaming actually forked Wine before it became LGPL-licensed. IIRC they were also not really contributing much back (at least in the Cedega days).
[1] This was a screenshot of my machine in 2004, running Microsoft Office under Wine under NetBSD's Linux emulation:
> Approximately half of Wine's source code is written by volunteers, with the remaining effort sponsored by commercial interests, especially CodeWeavers, which sells a supported version of Wine.
I used to work on wine full-time during my summers. The company I was working for was in cloud gaming, and linux was a better server environment. We used WINE to emulate the games. There are quite a few companies with a strong enough interest in WINE that they hire developers to work on it.
CodeWeavers takes corporate sponsorship of specific fixes that an application needs to be supported. First the MVP version ends up in their commercial product. Once the fixes required for a given app are cleaned up and refactored they roll these back into the opensource Wine project.
Check out an application called 'cmus'. Very nice minimal music player on Linux that runs from your terminal. It has support for lots of different formats.
On linux, just use the mingw c/c++ compiler and run/debug under wine. When it's time to run under windows, I compile under cygwin's mingw, and that usually just works.
I'll edit later today when I get a chance and post my makefiles / compiler opts to github.
If you need clarifications, just post a comment there.
As for faster: VS2017 takes minutes to load on my machine (which is fairly beefy, BTW). Even back when VS was fast, I could still work faster with the old userland programs like vim, grep, make, etc...
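The parent's actual makefiles weren't posted, but the cross-compile-and-run-under-Wine loop described above can be sketched as a minimal Makefile. The toolchain triplet `x86_64-w64-mingw32-gcc` is an assumption here; the mingw-w64 package name and prefix vary by distro, and recipe lines must be indented with tabs.

```make
# Minimal sketch: build a Windows binary on Linux, then test it under Wine.
CC      = x86_64-w64-mingw32-gcc   # mingw-w64 cross compiler (name varies by distro)
CFLAGS  = -Wall -O2
TARGET  = app.exe

$(TARGET): main.c
	$(CC) $(CFLAGS) -o $@ $<

run: $(TARGET)
	wine ./$(TARGET)

.PHONY: run
```

`make run` then gives a quick edit-compile-test loop without ever leaving Linux; the same source can later be rebuilt under Cygwin's mingw for a native Windows check.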
Does anyone know if this will support running SQL Server Management Studio? That's the only thing keeping me using Linux in a VM instead of as my main.
I haven't seen any reports about it specifically, but anecdotally SQL Server 2008 R2 runs well on Wine 2.0 and later when you install .NET 3.5 into Wine, and Visual Studio 2010 Express works pretty well with the same caveats, so I think the SSMS from around that time should work okay with more modern Wine releases.
Give it a shot and provide feedback, if no one tries it then it'll never get the support or workarounds it might need!
IANAL but the Microsoft License Terms for OEM Windows 10 [1] says yes:
2. Installation and Use Rights.
b. Device. In this agreement, “device” means a hardware system (whether physical or virtual) with an internal storage device capable of running the software. A hardware partition or blade is considered to be a device.
d. Multi use scenarios.
(iv) Use in a virtualized environment. This license allows you to install only one instance of the software for use on one device, whether that device is physical or virtual. If you want to use the software on more than one virtual device, you must obtain a separate license for each instance.
This really tempts me to go back to Linux as my "daily driver".
My main issue is working from home, and connecting to my work's VPN. We use Pulse Secure, and it does a host scan that only works on Windows and Mac OS X.
Has anyone had any experience with getting Pulse Secure running under Wine and having it trick a corporate VPN host-checker that it is indeed a compliant version of Windows?
I have successfully used OpenConnect to replace Pulse Secure. Depending on the server configuration, you might need to change the user agent to Windows and have a fake host-checker script. The Linux version of Pulse Secure requires your administrator to configure Linux as "supported" on the server side, which is often not the case, making it pretty much worthless.
One thing I noticed is that the network-manager-plugin will disconnect you when changing networks, while the command-line-version reconnects without a need to re-authenticate.
In another instance, I used a windows host with Win32-OpenSSH. I used a proxy.pac script for web browsing and for SSHing I tunnelled using the ProxyJump option. You could also configure your Windows VM to act as a router and set the routes in the Linux host to go to the VM.
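The ProxyJump arrangement described above looks roughly like this in `~/.ssh/config` (host names and address are hypothetical):

```
# ~/.ssh/config -- hop through the Windows VM's Win32-OpenSSH to reach internal hosts
Host winvm
    HostName 192.168.122.10          # the Windows VM running Win32-OpenSSH
    User me

Host internal-box
    HostName internal-box.corp.example
    ProxyJump winvm                  # tunnel through the VM's SSH server
```

With that in place, `ssh internal-box` transparently routes through the VPN-connected Windows VM, while the rest of the Linux host's traffic stays off the VPN.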
If it's possible to do that, then the host scan is useless from a security perspective, as that means some sort of malware could also claim to be compliant.
That describes 90% of enterprise security: no reasoned threat model, only stops the most primitive / old attacks, but the vendor has a nice PowerPoint deck and said it’d check a box for the auditors.
This has been my simpler workaround at times: use the VPN for intranet specifics and do the rest of the work from Linux, with shared folders on the Windows side just in case. We're not forced to use Windows for our VPN, but I only ever got it working on Windows at first, and recently on Ubuntu; hopefully I can export my settings somehow.
+-Software
|
+-Wine
|
+-Direct3D
| |
| +->csmt
| | [DWORD Value (REG_DWORD): Enable (0x1) or disable (0x0, default) the multi-threaded
| | command stream feature. This serialises rendering commands from different threads into
| | a single rendering thread, in order to avoid the excessive glFlush()'es that would otherwise
| | be required for correct rendering. This deprecates the "StrictDrawOrdering" setting.
| | Introduced in wine-2.6.]
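For reference, enabling the setting documented above amounts to a one-key `.reg` import (the file name is up to you; the key and value match the excerpt):

```
REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\Direct3D]
"csmt"=dword:00000001
```

Import it with `wine regedit csmt.reg` (or set the value interactively in Wine's regedit).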
(CodeWeavers contributes perhaps the majority of the code to Wine 3.0, so Wine 3.0 will also run it well, but they have a terrific product because they let you essentially sandbox each Windows app in a separate Wine "bottle" container with different settings for each, and because they provide default settings and switches for most major apps.)
Interesting. Just today, I decided I'd had enough of the Outlook 365 web client. Maybe I'll give Outlook 2016 and Wine 3.0 a try before I fall back to plain IMAP etc. at work.
Don't know if Outlook works. That part was the one part of Office that always caused me issues in Wine (but, frankly, I haven't tried it in years). Word, Excel, PPT -- all worked great otherwise. (small tip: if I disabled compositing in my window manager, then the translucent selection drag box in PPT turned black, making it impossible to see what I was selecting.)
Also, there's a very simple extension for OWA in the Chrome Web Store; it just tricks OWA into thinking you're running Chrome on Windows instead of Linux (where Microsoft subjects you to the ancient version instead). I know the author pretty well ;) However, I haven't been forced into running OWA or connecting to Exchange at all, so it might have bitrotted or not be needed anymore:
Last time I tried wine was something like 15-20 years ago. I thought it was a great project but that they would never be able to get it right. Is it working well now?
In any case, congratulations to the developers for their work.
Decent, considering the magnitude of the effort needed. I use it to occasionally play Age of Empires 2 on Steam running over Wine. It is playable, but still has some bugs. For example, the embedded browser doesn't work with Steam, and there's a years-old bug that makes your view scroll sideways, for which there is a patch that is ignored by mainline.
I switched from OSX to Linux over the past year and being unable to play music I bought from iTunes BEFORE they removed the DRM is my biggest pain point.
Actually tried that and even then those songs re-download with the DRM on them. For some reason there is this window of time when songs were purchased where they won't allow the DRM to be removed. And Apple got the extra $25 from me.
My best option is probably to sit down and put this sleeve of old blank CDs to good use and burn all the songs so that I can re-rip them as MP3. It's just irritating.
Wouldn't bother me so much if it didn't happen to include the music that I code to (the first Iron Man soundtrack: consistently paced instrumentals that all seem to flow well together).
If you've paid for the music, surely it's morally better to just torrent the tracks: it must be far less wasteful in energy and time, and no one is suddenly going to get a copy who wouldn't have obtained one before.
I feel like the last time I tried it, it wanted me to install some things and there were some errors on Mac? I don't totally remember, but it wasn't smooth sailing a year or so ago.
It's been a while since I last used Wine. Actually, it's been since Steam started its push for Linux support in games.
But it still happens that I will use wine for games I really want to play and which do not have linux support (it's extremely rare I want a game that bad, the last one was No Man's Sky). Being able to launch them just like a native app when I feel like playing is a huge advantage (not even sure it would work properly in a virtual machine, with its dedicated RAM and CPU limits).
So I guess my use case for Wine is "gaming fallback". Not that I speak for the majority, obviously; let's see what other people have to say about it. :)
PCI passthrough is still quite tricky to get right on many systems, and most of the people doing it reliably and stably with common hardware are pretty strong linux users who configured a KVM host to do passthrough.
Also, installing, updating, configuring, and maintaining Windows in a VM is substantially more hassle than installing a game in a CrossOver container. More people are familiar with doing the former than the latter, but the latter is still a lot less effort.
They are not even remotely the same thing. WINE is a compatibility layer that translates Win32 API calls into POSIX equivalents. VirtualBox is virtualization, where the whole OS is being virtualized.
The OS isn't being virtualized in VirtualBox, the hardware is. VirtualBox pretends to be a second computer inside your computer, but that second computer still needs its own real copy of an operating system.
Both options should be very fast, but Wine will generally have lower overhead (no hypervisor) and in some cases the app will run faster in Wine than it does on Windows. In VirtualBox the app will always run slower than it does on just Windows.
Wine also gives programs much better access to the GPU.
Everything is 100% native and executed directly on the processor, there is no intermediate layer or translation occurring like it would for a VM.
When you run Wine, you are not running Windows, you're running unrelated code that just happens to do the things that Windows programs expect. For example, when a Windows program asks for a window, it asks in the Win32 API. Wine knows this API and responds to the request with an X11 window, like any other program on your system would get. Wine does this for almost all Windows API calls; it takes them and handles them in a way that makes sense for a POSIX system. The application being executed never realizes it's not on Windows because the API is fully satisfied, and Wine makes sure that function calls that mean "give me a window", "make a sound", and "write to a file", etc., all do things that make sense, just like MS Windows does when the same applications make the same requests on a system that runs MS Windows.
Because Wine has its own implementations of these functions and isn't just executing Windows code on a virtual processor, Wine's version of the function may be faster or slower than the MS version because it may satisfy the request with more or less work. All that matters is that the implementations meet the caller's expectations.
It depends on what you do, which version. Photoshop CS2 or CS4 should be a breeze on most systems, but you need RAM to make it work. 4GB is a minimum, 8 or 12 is much better. About 5 years ago I tried Wine with Photoshop, but couldn't get it to work. After a while the windows fell apart, so to speak. Buttons failed and so on.
After that I moved to VirtualBox, and that is the most reliable option for me, especially with the right snapshots available: one right after installing Windows, one after configuring it, and one after installing Photoshop. The option to go back to a previous state is great.
To be fair, Microsoft has yet to remove all the Windows 95 style design. The Windows 10 installer has at least 3 different eras of design visible at once.
[1] https://imgur.com/a/k0HI0
[2] https://www.reddit.com/r/linux/comments/7ql4kl/the_screensho...