I just don’t see how an iPad Pro could ever really work as a computer replacement for true professional workflows. I’m talking about multitasking between Final Cut, building a website, creating assets in Photoshop, taking screenshots, downloading things from an email... these are daily tasks for me. An iPad would struggle to do one of these at a time, let alone all of them at once. iOS is just too limited to ever make it a real “pro” machine. For limited things like a photography or Photoshop workflow, maybe, if that’s all you do. So I think it’s time to give up on the idea of the iPad as a true pro machine. It will never replace your laptop, and Apple doesn’t want it to. It’s a shame, though, because the hardware is certainly capable enough. It would require Apple basically throwing out iOS for iPad and designing a whole new way of interacting just for the iPad. iOS is great for iPhones but not great for professional applications. It would be cool, but I doubt Apple will do it.
I’m an artist who owns one and I totally agree despite it now being an essential tool for me. Lots of airdropping files back and forth with my MacBook is the reality and it works great.
You quickly realize just how crucial a mouse and its modifier keys are for efficient design work. It’s great for early-stage thumbnailing. I’ve actually never been able to thumbnail as effectively as I can with the Pro, solely because I can use it as a sketchbook with integrated Google Images. Once a project gets rolling, it’s mostly used so I can shuffle over to my comfy chair and do some detail work with the Pencil.
That said, I wouldn’t put it past a young person who hasn’t been molded by a desktop to blow me away with it. It’s not like my desktop workflows don’t have hacky and annoying steps along the way to a final.
Right, it can work for part of the workflow, for specific things like detail work with the Pencil. What I find challenging is finding a smooth workflow where I’m not wasting time trying to get the iPad to do what I want. I feel like I spend most of my time working around its limitations, so it’s hardly worth it (for me). Maybe that will improve with time, though.
I also don’t really use iCloud Drive; maybe that would make things easier, since one of the biggest problems is simply getting the things you want on the iPad ON the iPad. Importing photos, for example: you have to import into Photos and then import again into Lightroom.
Adobe seems to be putting a lot of effort into Photoshop CC for iPad to make sure that moving back and forth between the desktop and iPad versions is effortless if you choose to work that way.
>the biggest change of all is a total rethinking of the classic .psd file for the cloud, which will turn using Photoshop into something much more like Google Docs.
“The beauty of it with Creative Cloud and the Cloud PSD and the innovations there is that you can just pick up where you left off, and you can be somewhat agnostic,” Belsky says. “You can always go back in history. You can share it and have other people be able to go back and undo things you did.”
I just have an ssh window and web browser open, both of which the iPad mostly handles fine. For me, it was a no-brainer because it's light and easy to travel with, and doesn't require 40 minutes of updates every time I open it (hello, Surface Pro 4, my previous laptop). Does it replace two 32" monitors? Nope. But I never had those when I was travelling.
I also draw and edit photos on my laptop. The iPad handles that fine. I have a Cintiq Pro 13" and it's a piece of garbage that made me hate drawing. I am glad it's gone from my life. The Apple Pencil is way better. (I hate Lightroom/Photoshop on the desktop, so I am trying to deal with Affinity. We'll see how that goes after I go on a trip where I take some photos.)
All in all, there are many tasks that can't be done on any laptop. That is why you can buy desktops. If the primary goal for your mobile computer is to not have to spend 40 minutes disabling Cortana, the iPad is perfect, which is why I bought one.
My only complaint is that Safari requests mobile sites instead of desktop sites. That is super annoying. I have a desktop-sized screen and a keyboard. Give me the desktop version.
> My only complaint is that Safari requests mobile sites instead of desktop sites. That is super annoying. I have a desktop-sized screen and a keyboard. Give me the desktop version.
Hold the refresh button and you can reload the page as the desktop version.
Which is a good thing. Adapting layouts to all those different, wildly varying screen sizes is essential, and responsive web design solves this problem.
That's fine, but what I don't like is when websites make certain features unavailable on their mobile websites and provide no way of getting around this restriction.
There are iOS browsers that let you change the user agent and request desktop site. I'm not sure how well they work though since I haven't had a need for that feature. But you should check them out at least.
> An iPad would struggle to do one of these at a time let alone all of them at once.
You haven’t touched a modern iPad, have you? Not only do they not ‘struggle’ to do one of those at a time, they’re butter smooth at it. The workflow/task-juggling aspect is definitely an issue, but there is nothing preventing an A10X or A12X iPad from doing any of those things with all the performance you’d expect from a $3k+ MacBook Pro.
GP is talking about how the design of iOS, not the hardware, prevents one from doing all those tasks in a satisfactory way.
For example, I could be editing a movie and waiting for it to export. Then I switch to a different app and start editing my photos. There is a task-completion background API that allows the former to continue running, but there is no guarantee, nor any notification, when iOS decides my new photo-editing app is consuming too many resources and jettisons or kills the movie app. It contributes to this nagging feeling of needing to switch back to the background app so that iOS will keep it alive.
It's just an app management model ill suited to professional work.
I own an iPad Pro, for reference. I should have been more clear: while the iPad can do some of those things, and the hardware is certainly good enough, iOS doesn’t fit my use case well (doing lots of things at once).
I'm with you, but we have to remember that most people, including most "pros" (meaning working professionals), do not do coding. For 95% of working professionals, the iPad may actually be more than enough.
Right. My use case is probably not the average one. I suppose if I did a lot of just emails and word processing/web browsing it would work. But it kind of defeats the purpose of all that horsepower.
Until Apple puts its own pro applications (like Final Cut) on the iPad, it’s not a real pro machine in my view. If it’s so “pro,” then make your own apps work on it!
It always takes a few decades for UIs to catch up with new hardware. We still don't really understand how to design touch UIs yet. We basically just squish WIMP interfaces a little so you can fat-finger them. There has been no radical reimagining of UI toolkits for touch yet.
The software industry is so crippled by finance thinking right now that everything research-y moves extra slow. We're in the second dark age of UI.
People are doing weird, pointless stuff in VR UI right now, so I think that might be where the ideas get worked out. That's how it worked before: we got out of the WIMP stagnation because a whole new paradigm (hypertext) emerged.
I'm not sure why you'd expect a radical reimagining of UI for touch. Touch UIs look largely the same as WIMP UIs because there isn't a radically better way of presenting information and affordances on a screen.
Have you interacted with many professional UI designers? They almost universally have no time, they are forced to rush out mediocre work on a strict timetable.
I have a design background myself, and frankly, you're not going to do much better for flat screen-based UIs than buttons, scrubbers, tables, lists, scrollviews, etc. This has little to do with touch, and everything to do with 2D design for screens. Better/worse design will depend on how these idioms are used/organized, rather than a radical rethink of those idioms.
That's not to say there aren't good undiscovered idioms for 2D UIs, but there's no shortage of designers trying to find them, especially outside of their dayjob. Designers love trying to invent new and novel ways to interact.
It is, however, actually an advantage for touch UIs that they aren't radically different from WIMP, because users don't have to learn a completely different UI, at least in terms of visual organization/affordance.
You can also bet the Apple designers/engineers who were prototyping iOS UI idioms before the iPhone was released explored a range of different idioms, and they continue to do so.
It works fine for me, aside from having the right software available. The multitasking actually works in my favor; I am more focused on the iPad and organize things differently (less messy) because I know multitasking is bad and buggy (switching between terminal programs and browsers often has buggy side effects, so I make sure I have what I need before I use the terminal and then use only the terminal until I'm done). It is far from perfect, but avoiding the Windows updates (how do they get away with it?) and the low battery life of laptops is worth it for most of my work.
Using UIPanGestureRecognizer for gestures that should be recognized immediately on touch down is a common mistake I see a lot of people make. The problem is UIPanGestureRecognizer doesn't actually start recognizing until the touch has moved a short distance, and at this point, you can't recover the original touch location, only the location at which the gesture recognizer began recognizing.
The solution is to use a UILongPressGestureRecognizer with minimumPressDuration set to 0. With this configuration, it will begin recognizing immediately on touch down, allowing you to take action even before the user has moved their finger (or pencil).
Admittedly you do lose the convenience methods from UIPanGestureRecognizer (sure would be nice if Apple would add them to UILongPressGestureRecognizer too), but you can recover translation just by grabbing the location of the recognizer in the associated view when the state moves to .began and then subtracting that from the current location on every subsequent .changed state callback. Calculating velocity is harder of course, but you could do it without too much difficulty by recording the previous location and timestamp and using that to calculate an instantaneous velocity (I'm not sure if this is what UIPanGestureRecognizer does or if it tries to smooth the velocity at all using multiple "frames").
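The bookkeeping described above can be sketched in plain Swift. Note this is a minimal sketch with no UIKit: `Point`, `DragTracker`, `began`, and `changed` are illustrative names of mine, standing in for the state you'd keep alongside a `UILongPressGestureRecognizer` whose `minimumPressDuration` is 0.

```swift
import Foundation

// A tiny stand-in for CGPoint so this compiles without UIKit/CoreGraphics.
struct Point { var x: Double; var y: Double }

final class DragTracker {
    private var start: Point?
    private var previous: (point: Point, time: TimeInterval)?
    private(set) var velocity = Point(x: 0, y: 0)

    // Call from the recognizer's .began state: remember where the touch started.
    func began(at p: Point, time: TimeInterval) {
        start = p
        previous = (p, time)
        velocity = Point(x: 0, y: 0)
    }

    // Call from each .changed state: returns the translation since .began and
    // updates an instantaneous velocity from the previous sample (one "frame";
    // a real implementation might smooth over several samples).
    func changed(to p: Point, time: TimeInterval) -> Point {
        guard let start = start else { return Point(x: 0, y: 0) }
        if let prev = previous, time > prev.time {
            let dt = time - prev.time
            velocity = Point(x: (p.x - prev.point.x) / dt,
                             y: (p.y - prev.point.y) / dt)
        }
        previous = (p, time)
        return Point(x: p.x - start.x, y: p.y - start.y)
    }
}
```

The value returned from `changed` plays the role of `UIPanGestureRecognizer`'s `translation(in:)`, but measured from the true touch-down location rather than from wherever panning was first recognized.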
I don't have any linkable examples offhand, but at a previous company one of our standard interview questions was to implement custom drag & drop of a view using a gesture recognizer, and our reference app used UILongPressGestureRecognizer because it let us do such nice things as visually "lift up" the view (scale it up slightly and add a drop shadow) the moment your finger touched it.
Actually, quite well. The fact that it's drag & drop is pretty irrelevant, except that it's not something the interviewee has likely done before, because it's not a very common thing to implement.
What's more important is that it's fairly basic usage of UIGestureRecognizer (which is a common thing to need, at least if you're doing any kind of UI development) as well as some basic management of state involving 2 views (the view being dragged, and the destination view it's dropped into).
You could do that, but it's more work than using a gesture recognizer, and doesn't compose as well (for example, with the gesture recognizer approach, you can easily add in other behavior like conditionally enabling a tap gesture recognizer that has higher priority than the drag gesture recognizer, without even having to touch the code that implements the drag beyond having access to its gesture recognizer object).
Using an instant LongPress seems like sort of a hack though.
Interesting though, how do you transition from the instant LongPress into the "Pan" gesture? Or does the LongPress by itself also get the 'moved' events?
Couldn't you also subclass UIPGR and override `touchesBegan`, etc.? Gesture recognizers receive the touches before the parent view does, so you can just save the position when the touch starts and let it pick up from there.
Cool hardware, but there's no way I could justify the price to myself with the limited functionality of iOS as it stands today. A lot of Apple tech writers/podcasters keep saying "just wait for iOS 13!", but the earliest a developer is going to see iOS 13 is 6 months from now at WWDC, and the earliest the general public will get any additional functionality is 11 months from now. That's a LONG wait to see if any changes will be made to make these capable of doing real work for all but artistic professionals.
I think given the specifications of these devices, the pricing, while definitely high, is not completely unreasonable for what the hardware can do - these iPads do things no currently shipping Mac notebook can - 120hz refresh, 240hz touch/pen tracking, FaceID, performance in similar ballpark to a $3k MacBook Pro while passively cooled (!) etc etc.
It’s just such a shame iOS isn’t quite ready yet to provide the kind of computing experience the hardware is clearly capable enough to deliver. If this is going to be priced like a macOS machine, it needs to offer a reasonable set of Mac style functionality - for me this means proper multi-user support so my family can all use the device with their own accounts and services and ideally some kind of windowing model for the OS to better handle multi-tasking.
iOS will need to answer a lot of questions in the next few years that it has so far steadfastly avoided. Apple bragged about how these iPads can drive 5K monitors via an Xbox One S performance-class GPU, but without indirect input methods (e.g. a mouse) this feature is really not much use for anything other than display mirroring. Today I only ever use video out on iOS devices for occasional video playback on TVs, and I suspect I’m not alone in this.
iOS will likely either need to support a mouse or Apple will have to create some kind of standard for multi-touch external monitors that can allow iOS application input via touch on the external display if professional class workflows using multiple monitors on iOS is ever going to be a mainstream thing the way it is on the Mac.
> I think given the specifications of these devices, the pricing, while definitely high, is not completely unreasonable for what the hardware can do - these iPads do things no currently shipping Mac notebook can - 120hz refresh, 240hz touch/pen tracking, FaceID, performance in similar ballpark to a $3k MacBook Pro while passively cooled (!) etc etc.
Totally agree, and if they had Xcode or a decent native C# IDE on there, I'd be at an Apple Store right now forking over fistfuls of cash.
I'd be surprised if Apple ever does more than the current two-window model. A multi-user feature would certainly be nice though.
I think iOS is mostly fine. I don't think it's supposed to be a macOS replacement. There are some things a Mac is better for and other things an iPad is better for. Apple wants you to have both.
You can actually kind of fit three windows on an iPad: two in Split View and another one on top in Slide Over. Even if you can’t interact with the background windows when Slide Over is active, they continue to refresh their contents.
Fun fact: I believe you used to be able to interact with the split viewed apps in early betas of iOS, but then the feature was removed and restricted to only when you had one app open.
You can interact with two Split View apps and one Slide Over app at the same time, but this is limited to newer models (I believe 2017 iPad Pro or newer). If you are on an older iPad, it is as you describe.
Yes, but this used to be enabled for all iPads. The feature was then removed, probably because older ones just couldn't keep all the apps running reliably.
I did not hear "wait for iOS 13"; searching Google, I see a lot of talk about features that do not seem very interesting. What would it be that we are waiting for? The only things most people (here and elsewhere) are looking for are better multitasking (rather, side-by-side 'anything': the same app with different or the same docs open, or different apps with different docs, plus copy/paste and drag & drop between them) and more freedom for devs to make better software. I do not see those features in the expected new features for iOS 13, but maybe I am reading the wrong posts?
- Lack of mass-storage support through USB-C and the Files app (this seems like a comically boneheaded omission that Apple is going to dig in its heels on until iOS 13, a year from now).
- No Xcode
- No Final Cut Pro
- No Logic Pro
- No pointing device, especially when used with an external monitor (I don't think anyone should hold their breath on this one, though).
Granted, the audience for Xcode is small, the audience for Logic Pro is small, the group wanting a pointing device is small, etc... but taken in aggregate you end up with a large group that still can't call this iPad a "computer" the same way they can their laptop/desktop.
But are any of these fixed with iOS 13, besides the storage? Xcode would be really good to have. Pointing device I do not share the pain of, but like you said, it's a small audience for any of those. I think Xcode might be bigger, as it touches the 'first programming experience' and educational space: an actual mature environment that doesn't require all the setup and misery of desktop development.
And another reason to wait a year at least to buy a new iPad Pro if you already have one. Wait till A13x or A14x to see an iOS that can utilize it properly.
> Wait till A13x or A14x to see an iOS that can utilize it properly.
What will these future processors do that's so earth-shattering that it can't be supported on today's iPad Pros, which will run new versions of iOS for many years?
Yep, I have a feeling this will happen a lot. Even if Apple waits 18 months to refresh iPad hardware, that's still only 4-6 months from iOS 13's release. And maybe some hardware criticisms, like having only one USB-C port, will be addressed (probably not, though).
When it comes to programming, I just can't see myself typing thousands of lines of code on a tiny iPad screen, no matter how clever the gestures and multitasking in iOS gets.
Maybe one day the iPad Pro + stylus will do wonders for visual programming languages like VVVV (can't think of any others).
> When it comes to programming, I just can't see myself typing thousands of lines of code on a tiny iPad screen
iPad Pro has a 13-inch screen; that’s the same size as my MacBook Pro’s display. With a good keyboard, such as the one Apple sells, I don’t see why it should be difficult to write code on an iPad.
I find it difficult to write code on even a 15-inch MBP display. It feels very cramped to me. There's barely room for 2 adjacent windows, so I spend all my time scrolling and switching desktops and re-layering windows.
13" is barely bigger than the 12" display on my Apple IIgs (late 1980's), or my first laptop (mid-1990's). That was a perfect size for a VT100 emulator (it's the same size as the original 1978 VT100!), but that's not how I write software any more.
The same size screen is actually relatively worse than 20 years ago, because these days all documentation is online, so it has to share my screen.
This is an iPad programming system I was hacking on a bit ago: https://www.youtube.com/watch?v=hQUq2INrdiU Also involved is me live-coding a fix for the system in itself (like you would in Smalltalk envs).
> To avoid having to re-create the same key commands multiple times, and to only do so when actually needed, we’ll use a lazy property to keep track of them internally as well
Honestly, I think this is a premature optimization. I don’t think it’s worth trying to prevent the allocation of a four-element constant dictionary, especially when it requires awkward syntax (indirection through a computed value) to work, because the property must fulfill a protocol requirement.
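For contrast, here is a minimal pure-Swift sketch of the two approaches being weighed; the class, property names, and dictionary contents are mine for illustration, not taken from the article.

```swift
// Sketch of the trade-off: recompute a small constant dictionary on every
// access, or cache it with `lazy`. For four entries, the allocation saved
// by `lazy` is negligible, which is the "premature optimization" point.
final class KeyCommandProvider {
    private(set) var buildCount = 0  // counts how often the computed version runs

    // Computed property: rebuilt on every access.
    var commandsComputed: [String: String] {
        buildCount += 1
        return ["n": "New", "o": "Open", "s": "Save", "w": "Close"]
    }

    // Lazy stored property: built once on first access, then cached.
    // To satisfy a protocol's `{ get }` requirement, you'd still expose
    // this through a computed property — the "indirection" mentioned above.
    lazy var commandsCached: [String: String] =
        ["n": "New", "o": "Open", "s": "Save", "w": "Close"]
}
```

Both forms return the same dictionary; the only difference is whether the literal is re-evaluated per access, which for a constant this small is unlikely to matter.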
I can imagine that Apple is ready to release an OS for the iPad with full functionality (file system, mouse & cursor) and would do so in the event that the more expensive, port-lacking new Air undersells. It wouldn't surprise me if they had it ready as a backup plan, especially seeing the success of tablet computing with the MS Surface.
> especially seeing the success of tablet computing with the MS Surface.
MS Surface sales last quarter: 602,000.
iPad sales last quarter: 9.6 million. Even the Mac shifted over 5 million machines, all the more impressive when one considers that for much of that quarter staple models such as the Air were long, long overdue a refresh. The Surface sales figure presumably includes models that aren’t even tablets - the “Surface Laptop” is likely one of their best sellers I’d assume, given the attractive combination of price, design, specs and generally positive reviews.
While I too would love to see the features you suggest in iOS, Apple’s success in the tablet space is largely unprecedented for a computer manufacturer.
I find it so weird that people continually talk about apple as if it’s producing one failure after another and they need to start copying less successful products, when it’s the largest company in the world and has more profits by far than any of their competitors in any market they’re in.
Fair enough, the numbers speak for themselves.
What strikes me as interesting though is that the Surface uncovered a lot of people (especially students on my campus) willing to use tablets as a primary device. I'd bet that a lot of people would shift from Macs to iPads if they had a more comprehensive OS.
I'd be interested in seeing what portion of total Surface revenue is from the Surface Book and the Surface Laptop, which are fairly traditional products compared to the tablets.