Hi everyone. I know that's the kind of bold statement you'd expect from a new coder rather than an experienced one, unless it's the real deal. And I've got so many decades of experience that I'm really looking forward to hearing feedback :)
Having written ObjC since 2009 or so, I honestly think it's a fine language, and although I've written a fair bit of Swift, I don't really see it as a significant improvement over ObjC, which is Good Enough™ to keep using. Something has to cause serious friction to be replaced by something significantly better, and ObjC/Swift just don't fit that pattern.
Yup, Objective-C's object system was really flexible and powerful to use. Dynamic mixins, swizzling, etc. were all useful tools for me. Having messages as first-class citizens was probably the most important part to me. It helps make the object system expressive enough that I don't remember writing many design patterns ;)
But seriously though, CLOS and Smalltalk-style OOP is probably the only flavor of OOP I really enjoy using, and Objective-C gets you way closer to that than C++ and Java do. (e.g. the way KVO is implemented relies on "isa-swizzling", or dynamically changing classes at runtime)
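To make the isa-swizzling point concrete, here is a rough sketch of changing an object's class at runtime with the ObjC runtime API. Greeter and LoudGreeter are made-up classes just for illustration (KVO's real machinery generates its observing subclass for you behind the scenes):

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    @interface Greeter : NSObject
    - (void)greet;
    @end
    @implementation Greeter
    - (void)greet { NSLog(@"hello"); }
    @end

    @interface LoudGreeter : Greeter
    @end
    @implementation LoudGreeter
    - (void)greet { NSLog(@"HELLO"); }   // picked up once the class is swapped
    @end

    int main(void) {
        Greeter *g = [Greeter new];
        [g greet];                                  // logs "hello"
        object_setClass(g, [LoudGreeter class]);    // "isa-swizzling": same object, new class
        [g greet];                                  // logs "HELLO"
        return 0;
    }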
Java is more like Objective-C and Smalltalk than C++. It only took its syntax from the latter; the semantics and dynamism come from the former and reflect the authors' experience with Objective-C frameworks at Sun.
Even Java EE was initially born as an Objective-C framework, Distributed Objects Everywhere.
Disliking OOP or preferring it wholesale is a false dichotomy. It's like saying I prefer hammers over screwdrivers. Just learn how the tools you have should be used and use them well.
The only app I'm currently maintaining and proud of[1] makes tons of use of "traditional" OOP. It uses lambdas and FP when necessary. I think it makes absolutely no use of JavaScript's dynamic features. I'm fairly sure this code would port easily to ObjC.
After 15-20 years, you just get bored of doing things in novel or "pure" ways, and do the bare minimum needed to get the job done that's in front of you.
I am not sure if you understood my post. I am in no way saying "OOP is bad in general" or even "OOP is good in general". What I am saying is "I strongly prefer Objective-C's object system over that of other languages." Then I provided examples of other object systems I liked, and how Objective-C feels close enough to them that I don't miss them when writing Objective-C.
Maybe saying "flavor of OOP" was too vague, but I am talking about implementations of object systems, not the (ill-defined) notion of OOP.
Using your analogy with hammers and screwdrivers, my post is less "I prefer screwdrivers over hammers" and more "I prefer screwdrivers with bit holders over screwdrivers without bit holders"
There are still many people who regard OO in Objective-C as “purer” OO than, say, Java (or something like “the correct way”, whatever that means). I think that’s what they were referring to.
Oof, hard disagree. I absolutely hated writing Objective-C for years: I felt like I had to write unnecessary 'glue' with header files, handling of 'nil' was always jarring, and square brackets at the start and end of every call felt horrendous, to me at least.
I relished the day Swift was announced, and have been using it ever since.
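On the nil point above, for anyone who hasn't run into it: messages to nil are silently absorbed and return zero/nil, so mistakes tend to become quiet no-ops instead of errors. A tiny (fragmentary) illustration:

    NSString *name = nil;
    NSUInteger len = [name length];             // no crash; len is just 0
    NSString *upper = [name uppercaseString];   // upper is nil
    NSComparisonResult r = [name compare:@"x"]; // r is 0, i.e. NSOrderedSame, a classic gotcha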
Agree -- my experience with Swift is that it's far more readable and closer to my personal aesthetics than Obj-C, but I actually struggled a lot figuring out how to write things the way that felt intuitive to me (ex. maintaining an observable global state + config that you can access from anywhere, easy declaration of and access to arbitrary/deeply-nested associative array keys, JSON handling, declare-once-use-anywhere icons and colors, that kind of stuff).
Once I had all the convenience guts in place, writing actual functionality has been a delight though (outside of the overly-verbose let/guard and type casting)
That said, I'm pretty sure I'm also probably just hard headed and doing it wrong, and could've learned the accepted patterns/methodologies lol
I always thought the square brackets were clever. Like wrapping a letter in an envelope, which is a great metaphor for the message sending that the syntax denotes.
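For anyone who hasn't seen it, a message send looks roughly like this (inbox and note are just made-up names for the example):

    NSMutableArray *inbox = [NSMutableArray array];
    NSString *note = [@"hello" uppercaseString];   // send -uppercaseString to the string literal
    [inbox insertObject:note atIndex:0];           // arguments interleave with the selector: insertObject:atIndex: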
Objective-C programmers say this, but I note that I've never once heard a Swift developer complain that it's too hard to discover API interface or keep things non-`public`.
> never once heard a Swift developer complain that it's too hard to discover API interface
Xcode presents the equivalent of a “header” when you follow a symbol into a framework you don’t have the source for: it’s a Swift file full of definitions only and no implementations. The compiler emits this for you automatically as a .swiftinterface file.
> or keep things non-`public`
I definitely am a Swift developer that would complain about this. It’s way too easy to be cavalier about using the “public” keyword and making things part of the public API when they probably shouldn’t be. It’s like engineers have muscle memory from Java and just type “public class” without really questioning why first.
In my current and previous jobs, we talked about (and partially implemented) low-level “contract” modules to avoid linking (and building) the entire module just to share behaviour.
That problem was already solved with header files; it’s trivial to split interface from implementation since they’re just two different files. But sometime around the 90s, probably thanks to Java, this was deemed inconvenient. Now we’re trying to reinvent that same pattern.
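For readers who never wrote Objective-C, the split looks roughly like this; Counter is a made-up class, with the .h as the published contract and the .m keeping private details to itself:

    // Counter.h: the interface callers get to see
    #import <Foundation/Foundation.h>
    @interface Counter : NSObject
    @property (nonatomic, readonly) NSInteger count;
    - (void)increment;
    @end

    // Counter.m: the implementation, plus private state
    #import "Counter.h"
    @interface Counter ()                              // class extension, invisible to importers
    @property (nonatomic, readwrite) NSInteger count;
    @end
    @implementation Counter
    - (void)increment { self.count += 1; }
    @end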
Everyone's brain is probably different, but when I first started writing Swift I definitely missed header files. When I switch back from a C project I miss them again.
The only access control I wish Swift had is typeprivate, so I could hide private things but make them available to subclasses (or perhaps protocol conformers). Unfortunately Apple has only added a package level so far, which seems fairly useless (you're either too big for it to be useful or too small to need it). Obj-C didn't really have ACLs at all; you just hid stuff in interfaces, and once found, those interfaces were no protection at all.
IDEs have improved, so Swift integration and searching are easy now. Objective-C could do without headers these days, but when I used it 25 years ago, having headers made life easier.
On the whole I don’t mind Objective-C, but when I have to write it these days I definitely get annoyed by having to navigate and maintain header files. It’s more extra overhead than one might realize.
My other complaint with it compared to Swift is how one needs to pull in a bunch of utility libraries to do many things that come stock with Swift.
It’s less verbose (even if I’m not a square bracket hater). It has some really nice new abilities like async (way easier/cleaner than callbacks in many situations) and now actors.
But honestly 90% of it is true type safety. The type system is so much more powerful and expressive compared to Obj-C.
There is only one downside, and it’s real. Compiling Obj-C was instantaneous. Swift is MUCH slower, which also slows down error messages and hints. And the fancy type stuff can even timeout the compiler.
Combined with some Xcode issues (stale info anybody?) and it can be a pain.
Message passing goodness and flexibility (almost!) of Smalltalk, coupled with C for all things low level and perf related. It's a great language! I've switched to Swift for all things Apple these days, but I still miss coding in ObjC.
I agree that ObjC is nice, and proven ObjC codebases probably don't benefit enormously from being re-written in Swift, but that has little to do with how much better Swift is (and it is much better, IMO).
Swift is an improvement over C for the problem domain. Something akin to Swift as a modern C replacement, with the 'Objective' bits of Objective-C kept layered on top of it, would have made for the ultimate language, though.
Objective-C does just about everything it can to make sure you can mess with it at runtime and confuse the ever living hell out of any type checker that wants to be strict.
And the additional strictness is one of my favorite parts of Swift.
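A rough, self-contained sketch of the dynamism described above (Widget and its methods are made up for the example):

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    @interface Widget : NSObject
    - (void)layout;
    - (void)patchedLayout;
    @end
    @implementation Widget
    - (void)layout        { NSLog(@"original layout"); }
    - (void)patchedLayout { NSLog(@"patched layout"); }
    @end

    int main(void) {
        id w = [Widget new];                        // static type is just `id`
        if ([w respondsToSelector:@selector(layout)]) {
            [w performSelector:@selector(layout)];  // resolved at runtime, not at compile time
        }
        // Swizzling: exchange two method implementations on a live class.
        Method a = class_getInstanceMethod([Widget class], @selector(layout));
        Method b = class_getInstanceMethod([Widget class], @selector(patchedLayout));
        method_exchangeImplementations(a, b);
        [w layout];                                 // now logs "patched layout"
        return 0;
    }

A strict checker can't pin down what -layout means once the method table can be rewritten at runtime, which is exactly the flexibility being traded away.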
In theory, you are only reaching for the 'Objective' parts of Objective-C when your code actually benefits from being object oriented (in the Kay sense). Otherwise you can stick to pure C.
Of course, C has a lot of ugly traps which makes it less than ideal for this domain. This hypothetical subset language addresses those issues. While, again, you would only reach for the 'Objective' parts when your code benefits from being object oriented.
It is true that the inherent dynamism of message passing makes it impossible for static analysis to cover all cases, but as with all things in life there are tradeoffs. You lose the nice aspects of object-oriented systems if you do not allow for that, and OO is particularly well suited to UI code.
Of course, Swift abandoned the object oriented model completely. Which is fine. But Objective-C showed that you can have your cake and eat it too, offering OO where appropriate, and a non-OO language for everything else.
Objective-C's downfall was really just that C didn't age well, which, among other things, I am sure contributed to the 'Objective' bits being used where they weren't really appropriate.
Technically true if you enable it with the @objc flag. However, the documentation suggests that you only use that when you need to interface with Objective-C code, so it is not how one would use Swift in a pure Swift environment. Swift's primary object model design is much more like C++.
Playing with words doesn't change the fact that Swift fully supports OOP, even without any presence of @objc annotations.
Classes, interfaces, interface and class inheritance, polymorphism, variance, type extensions, compile-time and dynamic dispatch, overloading, associated types.
The original comment clearly states that, for the purposes of the comment, Kay's definition for OOP is in force. Your personal definition is cool and all, but has no relevance to the context established in this thread.
Is there some value in this logically flawed correspondence that I have overlooked? What is it trying to add?
I've had a similar experience, and generally agree with what you're saying. But I am glad Swift was created. All the plebs gravitate towards that language, so Objective-C remains unpolluted. I shudder to think how Objective-C would have deteriorated without Steve around.
Hardly, given that all Objective-C features since Objective-C 2.0 were aimed at improving the Swift interoperability story, as Chris Lattner has mentioned in a couple of interviews.
They were aware of Swift, and decided to make the upcoming OpenGL replacement framework in Objective-C instead of Swift, and only provide Swift bindings instead of doing it the other way around, implemented in Swift with Objective-C bindings for compatibility with "legacy" code.
Who is "they"? Chris Lattner works on compilers under Developer Tools. The Swift and Objective-C teams share an office and are often the same people. Of course Objective-C is going to get new features to help import it into Swift, because the whole point of Swift was to make a new language that worked well with the old one. Basically nobody outside that group had any need to know of the language at that point, especially since it wasn't ready for system use anyways. I would not be surprised if the first time most of the Metal team even knew Swift existed was when Craig introduced it on stage at WWDC.
It's been 10 years and I haven't had any issues. Maybe in another 10 years this will be enough of a problem to switch, but by that point I'll be retired
If we're talking about writing from ancient Sumeria, or another civilization in the distant past, then everything we can recover really is valuable. A text doesn't have to be another Epic of Gilgamesh for it to teach us a lot about societies we know relatively little about.
You have an implicit assumption that the benefit of learning information always justifies the effort and time spent on learning it. I don't think that's a given or true.
Now you’re moving the goal posts. You argued that writings from the past are worthless; I pointed out that they have great historical value. Cost-benefit analysis is beside the point. But we’re both here writing comments in a Hacker News thread so the option value of our time can’t be all that high.
It's following a passion, but hardly an "objectively perceived benefit".
Except in the sense that everything is a benefit (including shooting heroin, where the benefit is the high, and so on) where the term becomes meaningless. But even so, it still wouldn't be "objectively perceived". More like "subjectively pursued and felt".
Hard disagree when it comes to any ancient text. For example, many of the oldest cuneiform tablets are simply accounting and receipts. They give a direct insight into what was considered valuable enough to keep track of back then. Likewise, marginalia and even doodles tell us a great deal about what some individuals were thinking about.
This is as interesting to me as any form of literature.
Even if that is true, how do you know if something is worthy of being read without reading it? Isn’t the ability to write it off as unworthy worth the time to read it?
A friend of mine, a historian, says that people only wrote down what was important to them at that time - and that was, more often than not, who owed them money.
Our models of science are all going to be slightly wrong if there actually is a divine being that intervened in the events of history and changed or updated the laws of nature six distinct times during the genesis of the reality we know and experience scientifically. If this actually happened and we don't account for it, but try to answer questions with the incorrect assumption that the laws of nature have always been static and immutable, we're going to derive incorrect inferences.
Hey Jesse, hope you're doing well man. You and I talked about this topic a bit in the past, and since then I've had a little more insight into these issues:
1. Burnout happens when we don't believe in a cause or our belief in it is unjustified, whether we realize it eventually or intuitively understand it without it being conscious yet. In all three of these cases, the reality is just that it's not worth doing in the bigger picture.
Obviously, the question of why something is worth doing is complex. I'll admit that there's plenty of reasons for doing things that we ourselves aren't aware of for a long time.
I had a passion and almost obsession for solving software problems for decades, and I didn't know why, but I just followed the flow throughout it all. Eventually it led to a short-lived career, and the only concrete result of it so far is immaculatalibrary.com, but there's also the extremely in-depth knowledge and understanding of logic that I gained from it all.
Honestly I'm able to continue to work very steadily on immaculatalibrary.com, making incredible progress in the past month alone, without any burnout, because I fully know that it's worthwhile, even without explicitly being able to describe why.
Many of the usual reasons for writing software are simply insufficient: money, fame, solving problems we think are huge, inflating our egos, etc. Fortunately I have no following for my pet software project, so none of these things cloud my vision, and I'm able to focus on it for what it is, and for its only clear and immediate end: publishing and digitizing certain public domain books I find useful and think other people will too.
2. Burnout also happens when we realize that we're building a mansion on top of a pile of garbage. When my site was written in Jekyll or Node.js + Express.js + Postgres + Pug or any other technology I experimented with, it was incredibly hard to move forward and make real progress. I had already resolved to maintain the website indefinitely, but I gave up on it intermittently when I made it too difficult for myself to make literally any changes to the website, sometimes for months on end.
The solution for me was to start from absolute scratch, examine the principles of how I wanted to write the site, build it slowly according to that, re-examine these principles regularly in light of what I had already made and how it was working out, and evolve and update my method accordingly.
And honestly, during most of the past 2 years, I had a website-building app that I wasn't proud of at any of those iterations. But I am proud of what I have now. And it's very possible that I won't be proud of it in the future for what it is now, in the same way. But that doesn't quite matter. As long as I'm proud of it in the moment, and happy with it, while admitting to myself that it's an evolving WIP, and knowing full well that this is always going to be a dynamic process for myself, then it's good enough for the purpose it has.
But that's just the nature of software. It's never finished, but it can be finished in the moment for the job you need at that moment, as long as you examine what the job is and what it should be. The only way I've solved burnout for myself is making sure that the source and destination are satisfactory to me and that I'm fully honest with myself and open to the facts.
I genuinely feel like there are two camps of professionals: those who know how to do something and do it extremely well, and those who know how to describe something and break it down very well. The former are the ones making the magic happen, like Fabrice Bellard. The latter become very influential teachers like Bob Martin. Both are needed (although the former more so), but the latter usually get all the attention and credit.
There's a psychological basis for people taking pride in a difficult task learned and mastered, and many organizations take advantage of this by creating a culture that rewards this effort when put towards their own products with social status within the organization. It feels manipulative and unethical to me.