From what I've read, it's the "upgrade" to Rails for people who want higher scale. I don't know if I agree, as Rails definitely does scale if you know what you're doing. Would love to hear more about others' experiences using Elixir at fast, low-latency, high-throughput scale.
Rails is pretty fast today and, as you said, scales well. I don't think the addressable Elixir market was ever really that big. Even companies like Bleacher Report migrated off.
I'm working on an invite-only, personalized medical education platform called MedAngle, particularly for emerging economies. I started it in my third year of medical school, and we're now at 90,000 doctors/medical students, along with over 100 million questions solved and billions of minutes spent studying and excelling.
I get to lead a team of 175 doctors and students across premed, medical, and dental education. I am the first doctor + full stack technologist in the country. It's super rewarding. No funding, just revenue off our immensely low price point, and things are still growing quickly. All software written in house.
Gotta say, from a branding point of view, it's completely perfect. Sometimes things as "small" as the letters in a company's name can have a huge impact decades down the road. AI == AI, and that's how Apple is going to play it. That bit at the end where it said "AI for the rest of us" is a great way to capture the moment, and probably suggests where Apple is going to go.
IMO, Apple will gain expertise serving a monster level of scale for more casual users who want to generate creative or funny pictures and emojis, do some text work, and enhance quality of life. I don't think Apple will be at the forefront of new AI technology for user-facing features, but if they are to catch up, they will have to get to the forefront of the same technologies to support their unique scale.
It was a notable WWDC. I was curious to see what they would do with the Mac Studio and Mac Pro, but there was nothing about an M3 Ultra or M4 Ultra, or an M3/M4 Extreme.
I also predicted that they would use their own M2 Ultras and whatnot to support their own compute capacity in the cloud, and interestingly enough it was mentioned. I wonder if we'll get more details on this front.
At least they are honest about it in the specs that they have published - there's a graph there that clearly shows their server-side model underperforming GPT-4. A refreshing change from the usual "we trained a 7B model and it's almost as good as GPT-4 in tests" hype train.
Yea, their models are more targeted. You can't ask Apple Intelligence/Siri about random celebrities or cocktail recipes.
But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.
The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.
Yeah, but Apple wouldn’t care either way. They do things for the principle of it. “We have an ongoing beef with NVIDIA so we’ll build our own ai server farms.”
Apple has a long antagonistic relationship with NVIDIA. If anything it is holding Apple back, because they don’t want to go cap in hand to NVIDIA and say “please sir, can I have some more”.
We see this play out with the ChatGPT integration: rather than Apple hosting GPT-4o themselves, OpenAI is. Apple is providing NVIDIA-powered AI models through a third party, somewhat undermining the privacy-first argument.
Not really. They use ChatGPT as a last resort for a question that isn't related to the device or an Apple-related interaction. Ex: "Make a recipe out of the foods in this image" versus "how far away is my mom from the lunch spot she told me about". And in that instance they ask the user explicitly whether they want to use ChatGPT.
I see what they did here and it is smart, but can bring chaos. On one side it is like saying "we own it", but on the other hand it is putting a brand outside of their control. Now I only hope people will not abbreviate it with ApI, because it will pollute search results for API :P
Yeah I feel like we are getting the crumbs for a future hardware announcement, like M4 ultra. They’ll announce it like “we are so happy to share our latest and greatest processor, a processor so powerful, we’ve been using it in our private AI cloud. We are pleased to announce the M4 Ultra”
It was speculated when the M4 was released only for the iPad Pro that it might be out of an internal need on Apple's part for the bulk of the chips being manufactured. This latest set of announcements gives substantial weight to that theory.
As someone who works across the stack, I've come to really appreciate seeing "LTS". For me it comes directly from growing up with Ubuntu as a kid in technology, understanding that it means people are committed to supporting something for the long term.
Obviously I know there are business cases for this sort of stuff and whatnot generally, but as a kid first learning what LTS meant, I've always appreciated Ubuntu for this.
After 18.04 LTS the distro has been headed in questionable directions.
However, it is the only distro that came very close to unifying the desktop, mobile, server, and embedded application spaces. In a way it greatly impacted how people approached designs, as the classic heterogeneous build circus often became a trivial role assignment.
It takes a bit of work to get the current builds "usable", but the FOSS curse now tightly couples release cycles to specific application compatibility versions. Or put another way... everything is perpetually Beta eventually, or becomes a statically linked abomination. This is the very real consequence of the second system effect: https://en.wikipedia.org/wiki/Second-system_effect
At least they haven't jammed an AI search indexing snitch into their interface... yet...
I was about to comment that even the 5-year standard support of LTS releases seems to end before I'm ready, but I looked at the release cycle page and 24.04 is posted with a 10 year (ending Apr 2034) standard support lifetime. Is that a typo or did they put the "pro" (paid) support end date in the wrong column?
I run the largest medical education platform in MENAP as a solo founder. I'm Pakistan's first ever full stack technologist who is also a medical doctor (a country of 200m+ people). I have firsthand experienced how broken medical education is, and our platform has over 75,000+ doctors/future doctors and 70m+ questions solved. Looking for help with selling/pilots to medical schools; our cost is far less than pretty much everyone else in the space globally, while being more innovative.
If anyone is interested or can help, please do ping me - I'm sure we'll work something out. Email is azib [at] medangle [dot] com
Surprisingly, there was only one (!) education startup in this batch. Doesn't bode well for us, as we're applying in this Summer '23 cycle, albeit with customers, revenue, PMF, etc.
We’re letting you know about Stripe pricing changes for international card transactions, disputes, and sales tax, starting June 1, 2023. We also want to share ways we can help you reduce costs and grow revenue.
International card transactions
"Over the past few years, card networks have increased the total fees that Stripe pays for card processing. Because of these costs, starting June 1, Stripe’s additional fee for international card transactions will change from 1.0% to 1.5%. There’s no change to our standard 2.9% + $0.30 pricing for US card transactions.
[...]
Disputes
Whether you win or lose a dispute, card networks charge Stripe a fee in either case. To cover these costs, starting June 1, Stripe will no longer return the $15 dispute fee for successfully contested disputes. The dispute fee itself is not changing."
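To put the quoted rate change in concrete terms, here's a minimal sketch of how the per-transaction fee shifts for an international card. The function name and structure are my own; it only applies the rates quoted above (2.9% + $0.30 standard, plus the international surcharge going from 1.0% to 1.5%) and ignores everything else Stripe may charge (currency conversion, Radar, etc.).

```python
def stripe_fee(amount, international=False, intl_rate=0.015):
    """Rough estimate of the Stripe fee (in USD) on a charge.

    Hypothetical helper based only on the rates quoted in the email:
    2.9% + $0.30 per US card transaction, plus an additional
    percentage for international cards (1.0% before June 1, 2023;
    1.5% after). Not Stripe's exact fee logic.
    """
    fee = amount * 0.029 + 0.30
    if international:
        fee += amount * intl_rate
    return round(fee, 2)

# A $100 international charge: fee rises from $4.20 to $4.70.
old_fee = stripe_fee(100.00, international=True, intl_rate=0.010)
new_fee = stripe_fee(100.00, international=True, intl_rate=0.015)
```

So on a $100 international transaction the fee goes up by $0.50, and on top of that, a successfully contested dispute now permanently costs the $15 dispute fee.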