It keeps the drink at a specific temperature indefinitely.
With an insulated mug, you aren't putting more heat into the drink, just trying to retain it for longer. In my opinion, insulation works great for cold drinks (I use a giant insulated cup for my cold drinks) but not so much for warm drinks.
The old (maybe incorrect) advice I was always told was that continuing to heat the coffee fouled the taste, hence the recommendation to use an insulated mug/thermos as opposed to a Mr. Coffee style warmed carafe. What makes this gadget different/better?
Much of coffee's flavour comes from a balance of acids and oils. These, particularly the oils, are sensitive to temperature. But unless you're drinking your coffee all day, you're unlikely to notice it. Real spoiling sets in after several hours at the kinds of heat you're probably running your Ember mug at. Most drip brewers with heating elements for the pot keep the temperature too high (often just below boiling). A steady heat is much better for the flavour than reheating. (This is what I remember from working at a coffee shop way, WAY back when I was at university.)
That makes sense. The mug version of Ember (https://ember.com/products/ember-mug-2) is open on top, which seems like a fancy Bluetooth version of an old-school mug heater.
I could see the sealed one (the "Cup") as working better though.
- Why: lets me drink tea because I hate cold tea. Bleck.
Under $1000:
- 49" ultra-wide monitor to replace my dual monitor
- Why: I use i3-wm. It works fantastically on it.
- 5.1 Soundbar for computer
- Why: I don't have to wear headphones constantly, and the sound quality is so much better
- Mount 50" TV above my desk
- Why: Allows others to hang out in my office with me while I work / play games. Additionally, it allows me to turn something on without interrupting my monitor space.
---
Breaking the rules:
Over $1000:
- EightSleep
- Why: Huge, huge, huge change in sleep quality. Plus it's really nice to be able to choose whether to get into a crisp, cold bed or a warm, toasty bed. The Autopilot stuff is interesting and does seem to keep you asleep, but sometimes it can cause you to wake up, especially in the transitional periods of the year (spring and fall).
I would agree, except that now they seem to make it pretty pointless without a subscription. It kind of sucks that you have to pay $15/month to actually use the product after paying close to $2,000 for it.
I still use mine only because I was grandfathered in to be able to set temperature set-points without a subscription.
If I were to need another one, I would look for an option without a subscription.
That's just insane. And it's not a subscription you can ever stop paying unless you get rid of the €5000 bed itself. Absolutely a dealbreaker.
What if they increase the fee to €30 a month? Or €50? That's guaranteed to happen once main-product sales start slowing down and investors still want their quarterly gains.
I am kind of waiting for the other shoe to drop because the company has been nothing but amazing when it comes to their support. It has sprung a leak three times, about once every 18 months, and they've always replaced it with no additional payment.
I have a feeling next time it's going to involve a lot of cash being exchanged and I'll be faced with a choice.
---
The subscription thing is very much a bummer. I hadn't figured out why they weren't shaking me down for it until I put together that I'd been grandfathered in. It is extremely distasteful to have a required (?) subscription on a product this expensive.
I don't entirely get how spending $2k - $5k on a bed system (not a mattress!) isn't enough. Just pure fuckin' greed if I am being honest.
I get that they need to make money and their product isn't a monthly or yearly re-purchase, but it's just short-sighted.
Let's be real honest here: their product almost certainly doesn't cost anywhere near $5k in materials. The margins are likely already completely stupid. My guesstimate would be maybe $500 in parts. It's literally just an app-controlled pump that heats or cools water... not exactly anything crazy. And if you look at something like the Withings sleep tracker, it's $130, so even the fancy-pants sleep-tracking stuff isn't some crazy tech, even if we take Withings' retail price as the actual cost.
EightSleep mattress cover is a GAME-CHANGER. Precise control over preferred temp, and detailed/accurate logging of sleep metrics == way better sleep, and knowledge of its relative quality (metrics for which let me measure the impact of different variables).
Never heard of them, temperature control sounds amazing.
How is this product from a privacy perspective? Is the data kept local or is it some cloud service that tracks all metrics?
How do they work thermodynamically? It should make the room hotter, but I guess you don’t notice because the surface you are touching gets colder? Is it a closed loop heat exchanger or is it evaporative?
I've never tried a cooling electric blanket. My wife has struggled with insomnia for a long time, and the EightSleep tracking was almost as much a draw as the cooling.
For hot drinks (teas, coffee, etc.) I prefer a simple 0.7 l thermos and a small Chinese teacup. Put the hot stuff in the thermos and it will stay hot for at least 6 hours; pour a bit into the cup, and since it's small it will reach a comfortable level of hotness in 30 seconds. Drink it slowly and refill.
Works in the park/library too, and you only heat the water once and don't need electricity to keep it hot!
A cheaper alternative to EightSleep is Sleep.Me… same concept with a heated/cooled water circulator + mattress pad, and they have an “autopilot” add-on. I just manually set my temp schedules based on the season, since I don’t like subscriptions for stuff like that and I don’t want my sleep data on their servers.
I see that the cheapest Ember is now $130. My first-gen is about six years old, and the battery is about shot. I hesitate to drop another $150-ish on a new mug, but OTOH, is consistently hot coffee worth $25/year? Yeah, it probably is to me, so I'm going to have to suck it up soon.
You can buy only a replacement mug (without coaster/charger). Also, get in contact with customer support mentioning the battery, they usually offer a discount code.
I've had to deal with them twice: the coaster stopped working 2 years ago, and more recently the coating inside the cup came off. Both times, they replaced the broken part at no cost. It's probably the first time in my life I'm using and happily recommending customer support of any kind.
They're often much cheaper at Costco if that's an option and you don't care about a specific color. They are a complete game changer though, especially if you're a new parent.
I've purchased two Ember mugs over the years and ended up not using them; a thin double-walled glass mug paired with an always-on water boiler has been amazing. A kettle is great in Europe because they boil so fast, but in the US a boiler is the way to go.
Is there anything similar to 8sleep that doesn’t have all the bells and whistles and doesn’t require an app? I don’t care about any of the tracking or alarms or whatever else besides the temp control.
I tried and tried to sell stuff to "privacy freaks". Turns out that type doesn't like paying for closed-source services and will generally turn their noses up at subscriptions. I am one of these "privacy freaks" and love them, but they are honestly probably the hardest people to sell to.
I stopped trying to market to them. It is also a nightmare to reach that audience because they almost always run ad blockers, and any space they hang out in very quickly sniffs out ads or outright bans people promoting products they created.
Furthermore, privacy people tend to be technical themselves, so they would rather build a solution specific to them than use someone else's, unless it is difficult or requires specialized knowledge they don't care to acquire.
You should look into smoothing out the UX with toast or other notifications so users know whether they are waiting or not. For instance, "Daily Challenge" seemingly does nothing. Single player "Survival" is seemingly stuck waiting for another player.
It seems interesting, but unfortunately I am entirely confused about how to use it. Buttons don't work, the tutorial references highlighting that isn't happening, and other buttons seem to require multiple presses to work (such as selecting rock, paper, or scissors).
The confusion is completely understandable: it looks like there was a merge mix-up in git. It'll be back up and running soon, and I appreciate the input!
Since this thread is going to be bait for more experienced Elixir devs: can someone point me to books, blogs, etc. that discuss performance in Elixir?
I am working on an Elixir project that should under no circumstances be taxing, but it's falling over miserably at something like 100-200 events per minute. The wrinkle is that it's distributed-ish IoT. I didn't write it, the person(s) who did are gone, and no one else here writes Elixir. I've gotten some good gains already, but I'd like to squeeze out the 80-90% of juice I think I can get before resorting to beefier hardware.
I've gotten into instrumenting and measuring it and I have some ideas, but I'd love to have others point me to more. The real problem is that the hardware is miserably underpowered and the workload is real-time; by that I mean I can't defer, schedule for later, or de-prioritize anything.
---
To actually contribute: I really like Elixir. I am not yet sure why I would advocate for it over something more "simple" like nodejs (my background is, accidentally, Javascript World), but it's certainly a very nice language to write in. It feels magical, but not so magical that you get scared it's trapping you in its web.
Before anyone jumps on me too much for it: I gauge "simplicity" by how many people I can hire to write it. You can't swing a cat without hitting three competent Javascript developers. I tried for many years to hire another golang dev so I could write it professionally, and I only encountered a few despite sitting in on most interviews my employers did. With that said, it may just be that the overlap in the Venn diagram between "writes Javascript" and "writes golang" is small.
Interesting, Elixir should scale far beyond that. Are you doing a lot of non-IO processing or computation? I run Elixir on Raspberry Pi 4s doing IoT and they easily handle, say, generating graphs with hundreds of thousands of data points.
One possibility is you're using a single process instead of parallelizing things. For example, you may want to use one process per event, etc. Though if the hardware is very underpowered and say single core, I could see it becoming problematic.
> Are you doing a lot of non-IO processing or computation?
Unfortunately.
From metrics, computing AWS signatures takes up an absurdly large amount of CPU time. The actual processing of events is quite minimal and honestly well-architected, a lot of stuff is loaded into memory rather than read from disk. There's syncing that happens fairly frequently from the internet which refreshes the cache.
The big problem is each event computes a new signature to send back to the API. I do have to wonder if the AWS signature is 99% of the problem and once I take that burden off, the entire system will roar to life. That's what makes me so confused, because I had heard Erlang/Elixir could handle significantly more per minute even with pretty puny hardware.
One thing I am working on is batching; then I am considering dropping the AWS signatures in favor of short-lived tokens since, either way, it's game over if someone gets onto the system, as they could exploit the privilege. The systems are air-gapped anyway, so the risk is minimal in my opinion.
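Before dropping signatures entirely, one cheap win I'm eyeing is caching the derived SigV4 signing key: it depends only on the secret, the UTC date, the region, and the service, so it can be derived once per day, leaving a single HMAC over the string-to-sign per event instead of the full four-step derivation. A minimal sketch, assuming a hand-rolled signer (module and function names here are mine, not from any AWS library):

```elixir
defmodule SigCache do
  # Sketch: memoize the SigV4 signing key per UTC date so each event
  # skips the four-step HMAC key derivation. :persistent_term fits
  # because the key only changes once a day (writes are expensive,
  # reads are nearly free).

  def signing_key(secret, date, region, service) do
    key = {__MODULE__, date, region, service}

    case :persistent_term.get(key, nil) do
      nil ->
        derived = derive(secret, date, region, service)
        :persistent_term.put(key, derived)
        derived

      derived ->
        derived
    end
  end

  # Standard SigV4 derivation: HMAC-chain date -> region -> service.
  defp derive(secret, date, region, service) do
    ("AWS4" <> secret)
    |> hmac(date)
    |> hmac(region)
    |> hmac(service)
    |> hmac("aws4_request")
  end

  defp hmac(key, data), do: :crypto.mac(:hmac, :sha256, key, data)
end
```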
> One possibility is you're using a single process instead of parallelizing things. For example, you may want to use one process per event, etc.
This is done by pushing it to a task, i.e. `Task.Supervisor.async_nolink`? That's largely where I found my gains, actually.
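For anyone following along, roughly this shape (the supervisor name and `process/1` are placeholders for whatever the real app has):

```elixir
defmodule EventRunner do
  # Assumes the app's supervision tree starts a task supervisor:
  #   {Task.Supervisor, name: MyApp.EventTaskSup}

  def handle_event(event) do
    # One BEAM process per event; async_nolink means a crash in the
    # task won't take the calling process down with it. The caller
    # still receives the {ref, result} and :DOWN messages, so if it
    # is a GenServer it needs handle_info/2 clauses for those.
    Task.Supervisor.async_nolink(MyApp.EventTaskSup, fn ->
      process(event)
    end)
  end

  defp process(_event), do: :ok # stand-in for the real work
end
```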
It took a dive into how things schedule, because a big issue was that the queue would get massively backed up, and I realized I apparently needed to toggle on a flag telling it to pack the schedulers more (`+scl true`). I also looked into the wake-up lengths of threads. I am starting to get my head around "dirty schedulers" but I am not entirely sure how to affect those, or whether I even can beyond what the VM does for me.
The other one, just for posterity, is that I believe events get unnecessarily queued because they don't / didn't have locks. So if event A got queued and then created a timer to re-queue it in 5 minutes, event A (c|w)ould continue to get queued despite the fact that the first event A hadn't been processed yet. The queue would just continue to compound and starve itself.
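For posterity as well, the fix I'm moving toward is re-arming the timer only after the work completes, so an event can never sit in the queue behind itself. A hypothetical sketch of the pattern:

```elixir
defmodule Requeuer do
  # One process per recurring event id. Because the 5-minute timer is
  # only re-armed *after* handle_event/1 returns, the mailbox can
  # never hold two copies of the same event, so the queue can't
  # compound and starve itself.
  use GenServer

  def start_link(id), do: GenServer.start_link(__MODULE__, id)

  @impl true
  def init(id) do
    send(self(), {:process, id}) # kick off the first run immediately
    {:ok, id}
  end

  @impl true
  def handle_info({:process, id}, state) do
    handle_event(id)
    Process.send_after(self(), {:process, id}, :timer.minutes(5))
    {:noreply, state}
  end

  defp handle_event(_id), do: :ok # placeholder for the real work
end
```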
I don't know the specifics of your app, so I don't feel comfortable commenting in more than generalities. But generally speaking, if you are doing work in native code, and that work is CPU-bound (roughly, more than a millisecond of CPU time), you should try to do it on a dirty scheduler. If you don't, the CPU-bound code will interfere with the "regular" BEAM schedulers, meaning it will start to interfere with how the BEAM schedules all the other work in your app: regular function calls, IO, job queuing, serving requests, and whatever else.
I'm also suspicious of the `+scl true` setting as maybe being a bit of a red herring. I've been using BEAM off and on for 10 years both professionally and as a hobbyist and I've never used this myself nor seen anyone else ever need to use it. I'm sure there are circumstances where someone, somewhere has used this, but in a given Elixir app it is extremely likely that there is lower-hanging fruit than messing with scheduler flags.
In terms of queuing, are you using Oban or Broadway or something hand-built? It's common for folks to DIY this kind of DAG/queuing stuff when, 99.9% of the time, using something like Oban or Broadway would be better than hand-rolling it.
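If it helps, the Oban shape for this is tiny, and its uniqueness options also address the double-queuing problem from upthread. One caveat: Oban needs a database (Postgres, or SQLite in recent versions), which may be a poor fit for underpowered IoT boxes. A sketch with made-up module names:

```elixir
defmodule MyApp.EventWorker do
  # Hypothetical worker; MyApp.Events.process/1 is a placeholder.
  use Oban.Worker, queue: :events, max_attempts: 3

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"id" => id}}) do
    # Returning :ok completes the job; raising or returning
    # {:error, reason} retries it, up to max_attempts.
    MyApp.Events.process(id)
  end
end

# Enqueueing with a uniqueness window, so the same event id can't
# pile up behind itself while an earlier copy is still pending:
#
#   %{id: event_id}
#   |> MyApp.EventWorker.new(unique: [period: 300])
#   |> Oban.insert()
```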
It looks like others have addressed the first 90% of your post, so I'll refrain from commenting on that. I am curious about your timer code, though, because the timer shouldn't be firing at all unless the task associated with it has completed successfully. You shouldn't run into an issue where a timer is re-queueing the same task in Elixir.
> From metrics, computing AWS signatures takes up an absurdly large amount of CPU time. The actual processing of events is quite minimal and honestly well-architected, a lot of stuff is loaded into memory rather than read from disk. There's syncing that happens fairly frequently from the internet which refreshes the cache.
Oh, sounds nice! Caching in Elixir really is nice.
Okay, that makes sense. Elixir isn't fast at pure compute; it can actually be slower than Python or Ruby. However, the signatures are likely computed in NIFs (native code), and if so, the CPUs likely just can't keep up with them. Tokens would make sense in that scenario, but you should check the lib or code you're using for them.
> The big problem is each event computes a new signature to send back to the API. I do have to wonder if the AWS signature is 99% of the problem and once I take that burden off, the entire system will roar to life. That's what makes me so confused, because I had heard Erlang/Elixir could handle significantly more per minute even with pretty puny hardware.
Yeah, crypto compute can be expensive, especially on older / smaller CPUs without built-in primitives. Usually I find Elixir performs better than equivalent NodeJS, Python, etc. due to its built-in parallelism.
Also, one thing to look out for would be NIF C functions blocking the BEAM VM. The VM can now do "dirty NIFs", but if they're not used and the code assumes the AWS signing will run fast, it could create knock-on effects by blocking the VM's schedulers. That's also not always easy to find with the BEAM's built-in tools.
On that note, make sure you've tried the `:observer` tooling. It's fantastic.
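And if the box is headless, the runtime_tools application can sample the schedulers directly, which makes a saturated or blocked scheduler easy to spot from a remote shell:

```elixir
# GUI, when wx is available:
:observer.start()

# Headless alternative: sample scheduler utilization for 5 seconds.
# On modern OTP, dirty CPU schedulers are reported as their own rows,
# so you can see whether long-running NIFs land where they should.
:scheduler.utilization(5)
```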
> One thing I am working on is batching; then I am considering dropping the AWS signatures in favor of short-lived tokens since, either way, it's game over if someone gets onto the system, as they could exploit the privilege. The systems are air-gapped anyway, so the risk is minimal in my opinion.
> I also looked into the wake-up lengths of threads. I am starting to get my head around "dirty schedulers" but I am not entirely sure how to affect those, or whether I even can beyond what the VM does for me.
Note that dirty schedulers really only matter for NIFs which run longer than what the BEAM schedulers expect. I mentioned it with regard to the possibility that the AWS sigs are taking longer than they should; if so, they'd wreak havoc on the schedulers.
Once upon a time I needed to do hashes en masse for a specific blockchain project. Just a tad of Rust (via a NIF) really helped the performance. It might be of help to you; check this out (not my lib):
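For anyone curious, the Elixir side of a Rustler binding is only a few lines; the heavy lifting lives in the Rust crate, where the function can be marked dirty-CPU so long hashes run on the dirty schedulers. A sketch (the app, crate, and function names here are made up):

```elixir
defmodule MyApp.FastHash do
  # Loads the NIF from the Rust crate "fast_hash" at startup. On the
  # Rust side, the function would be annotated with
  # #[rustler::nif(schedule = "DirtyCpu")] so it runs on a dirty
  # scheduler instead of blocking a regular one.
  use Rustler, otp_app: :my_app, crate: "fast_hash"

  # Stub, replaced by the native implementation when the NIF loads.
  def hash(_binary), do: :erlang.nif_error(:nif_not_loaded)
end
```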
This fits my experience too. I've been able to crush Elixir apps pretty easily, both Cowboy and Phoenix.
I actually think Elixir doesn't have great performance. TechEmpower, which is IMO the most real-world standardized test suite out there, shows that Phoenix doesn't even complete. And Elixir+Plug+Ecto performs worse than Rails, which is an entire framework.
Everyone in Elixir land tells me "Oh, those benchmarks don't matter." Yet they are heavily talked about and referred to, here and in other places. They only don't matter if you perform terribly on them, I suppose.
And they say "Oh, we didn't care to put much effort into it", yet Jose Valim himself tried to work on it and didn't fix it. He's written extensively about how this type of test doesn't really fit Elixir, etc., but ultimately it's just doing DB queries; why does this not work?
I really think Elixir is mostly propaganda at this point. It's a huge mental paradigm shift, I have seen for myself that it wasn't performant, and, as you said, you keep thinking "Oh, I must be doing it wrong."
I just cannot fathom why anyone at a decent-sized company would use it, given all the negatives it has going against it. YMMV
> And they say "Oh, we didn't care to put much effort into it", yet Jose Valim himself tried to work on it and didn't fix it.
This is inaccurate. I only started looking into solutions again last week [1]. My suspicion was always that the database pool size was too small but, when I tried to contribute 4+ years ago, fine-tuning was hard because the feedback cycle took too long, so I didn't pursue it further [2].
My discontent with the benchmarks is that they are not measuring what people effectively run in production. Since you mentioned Rails, look at how the benchmarked Rails application is configured.
But almost nobody runs a Rails application like that in production.
And then you look at the configuration of most stacks, and they have been explicitly fine-tuned to match exactly the concurrency and traffic patterns that the benchmark sends through. But for most web applications, your web traffic is not homogeneous and you don't have a predetermined number of concurrent requests.
I still believe those benchmarks are not indicative of what you will actually see in production. Most companies who have gone from Rails to Phoenix, for example, report a 10-20x reduction in operation size and costs. But it is clear at this point that people put way too much stock in these benchmarks. The irony of it all is that, if someone copies these setups into their actual applications, they will most likely perform worse. Oh well.
[2]: https://github.com/TechEmpower/FrameworkBenchmarks/pull/5432 - here you can see me increasing the database pool to 40... but most benchmarks today run with 512-1024 connections (which, once again, is most likely not what you would do in prod). In any case, we need to bump our numbers accordingly.
The TechEmpower repo is literally a gift to the world. I've seen and learned many ways of optimizing code there... almost none of which should ever be used in production or would make it past a PR at most companies. Still, it's amazing in the sheer number of non-realistic approaches used. It really is a work of art.
You are right. There are many positive things about TechEmpower and keeping it open to all and running takes an incredible amount of effort, which is definitely appreciated and I failed to recognize. That was unfair of me.
Although I still strongly disagree with the initial observation that they are "real world". Most applications in the real world receive heterogeneous traffic, from a varying and unknown number of clients, with logging, potential contention on resources, and other aspects that are not taken into consideration.
I hate when people downplay TechEmpower. They are the best benchmarks going, and they're awesome because anyone can contribute, and, as you said, there is a lot of learning to be done there.
Sounds like you are agreeing with me. You tried to work on it years ago and didn't fix it. Even when it was completing last year, its performance was half of what Rails achieved, and Rails isn't known to be fast [1].
The database pool needs to be at 512 to 1024? Yet Rails seems to get away with far less than that and still have better performance.[2]
Even if, somehow, Elixir is actually faster, which seemingly lacks evidence at this point beyond anecdotes from people who have a vested interest in adoption (primarily Elixir dev shops and framework/language authors), the fact that the community of Elixir devs cannot get decent results on these tests says a lot.
If these people can't get good results in tuning Elixir, who can?
Not a surprise to see people ITT also noting that they ran into the same types of problems that the tests reveal.
> The database pool needs to be at 512 to 1024? Yet Rails seems to get away with far less than that and still have better performance.[2]
Rails' concurrency model is per-process, so you will have `n_workers` pools and the number of connections will be `n_workers * 3`, based on the code you shared. The production machine has 28 total threads [1], which means Rails starts 35 workers, according to their auto-tuning code [2]. Overall, Rails uses 35x more pools than the Phoenix application and has twice the number of connections (105). It is absurd to imply that Rails is actually running with far less.
Second of all, the database pool needs to be 512 or 1024 if your app server can push traffic quickly enough to the database. If you are increasing the Rails pool size and it is not getting faster, then Rails is your bottleneck. Based on the runs I did last week, it seems the pool is the bottleneck in Elixir's case. We will know for sure in a couple weeks once the PRs land.
> If these people can't get good results in tuning Elixir, who can?
If you want to compare with Rails in particular, we beat Rails in every benchmark except two:
1. Single query: which will be tackled by better sizing our pools
2. Plain text: where we are currently sending the wrong payload (but when we sent the proper payload, as in the last published round, we beat Rails too)
And it does not include any of my proposed changes yet.
My point is exactly that the feedback cycle is long, so it takes time, and I was not able to invest that time before. But given how intent some folks are on misinterpreting benchmarks (and assigning malice to the whole community), it is probably better to keep looking into it.
Nice to see Elixir put up decent benchmarks for once. It looks like it's about 80 percent faster. Still a far cry from a "10 to 20x reduction in size and cost".
I wrote, while learning, a monitoring system in Elixir. I put everything in one "Phoenix" app. What does it do?
* Query about 60 Oracle databases in parallel every five seconds.
* Query a number of KDB databases every five seconds.
* SSH to dozens of machines every ten seconds.
* Ingest statistics from Icinga.
* Pull ticket updates from Jira every couple of seconds.
* Provide facilities for connecting to screen sessions on 100+ remote machines and interacting with their interfaces.
* API access to internal and external commands/functions.
* More.
It does much more than this while serving LiveView sessions to users, on a dual-core machine (CPU: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz x 2), averaging about 8% CPU for everything it does.
The SSH component replaces a Python version that does the same thing. The Python instance, by itself, pushes a 6-core box (same Xeon as above, but with 6 cores rather than 2) to 100% CPU and spawns close to 3,000 OS tasks.
Many of the other checks running in that 8% CPU Elixir process replace Python, Node, and Java checks, each of which required more RAM, demanded more CPU, and was spread across eight to ten dedicated machines per check.
We have been retiring machines and reducing CPU load, so I'd say we do see the 10-20x reduction.
Now, my solution might have been written better, but that was what I came up with while learning Elixir.
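If anyone wonders how all of that fits in 8% CPU: each check is just a cheap supervised BEAM process, so dozens of SSH targets means dozens of lightweight children under one supervisor rather than thousands of OS tasks. A hypothetical sketch of the shape (all names invented):

```elixir
defmodule MyMonitor.SSHCheck do
  # One of these per host, polling every ten seconds.
  use GenServer

  def start_link(host), do: GenServer.start_link(__MODULE__, host)

  @impl true
  def init(host) do
    :timer.send_interval(:timer.seconds(10), :poll)
    {:ok, host}
  end

  @impl true
  def handle_info(:poll, host) do
    # run the actual SSH check against `host` here
    {:noreply, host}
  end
end

# Dozens of targets cost almost nothing on the BEAM:
children =
  for host <- ["db01", "db02", "app01"] do
    Supervisor.child_spec({MyMonitor.SSHCheck, host}, id: {:ssh, host})
  end

Supervisor.start_link(children, strategy: :one_for_one)
```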
YMMV. In benchmarks like cached queries, it already comes in a solid 5.5x faster than Rails, and it is worth remembering that the Rails app is way too stripped down compared to what folks actually run in prod (which was one of my original criticisms).
Disclaimer: I've written Erlang but not Elixir. From what I can tell, the issues are similar, though. I'd look at Fred Hebert's Erlang blog (ferd.ca) and "Erlang in Anger" for tuning hints. Erlang has been around a lot longer and its devs have gotten deeper into the nitty-gritty issues, it seems to me.
I wonder how far into generative AI we'll get before people develop "intractable FOMO": they see a beautiful place, then find out they can't visit it because it doesn't exist.
Are we seeing it already from people who are getting irrationally angry about generative AI?
That is fair. I am not smart enough to flesh it out, but I wonder if there is a difference between going into a reality expecting it to be fake and unknowingly being presented with a fake reality.
Maybe I am just weird, but I have definitely had at least one situation where a book got its hooks into me and my brain would treat events in the book like real events. I forget exactly which parts caught my brain, but a few times I had to remind myself the book was fiction when it popped into my head.
I block a lot of this stuff on social media. I don't want to develop feelings about places or people that don't actually exist, I think it's psychologically unhealthy.
A bunch of the people images are very clearly AI, though. I'd wager about 30-50% of them could be recognised as generated by anyone with a bit of understanding of how these models work.
Nobody knows how these models work, not even experts. These are black-box algorithms that people understand through the analogy of a best-fit curve through a series of data points. Outside of that analogy... nobody understands how generative AI works.
What made a model for a specific situation choose to generate a hand with 6 fingers instead of 5? Or 5 instead of 6? Nobody knows.
Even before generative AI was an issue, in the US you couldn't just tell a court "this is a photo, therefore it is evidence". There had to be witnesses who could testify about how that photo was produced, and the other side could cross-examine them.
But yes, you're already seeing politicians caught doing or saying embarrassing things claiming it's all a deepfake, but in most prominent cases of this there were lots of witnesses who can confirm that yes, they did say that.
If you allow these mistakes to be possible, they are inevitable. If you take basic precautions, you'll probably be fine.
I'd rather take a well-curated, trimmed-down .env over a poorly configured secrets manager that gives away the entire farm when the single secret leaks. Security isn't a single thing, nor is it bolstered merely by switching how you store your secrets.
The problem is not taking precautions to prevent leaks in the first place, not how you manage your secrets. If your threat model begins at, or quickly reaches, "the attacker is logged in as root", just post your stuff in a public bucket and get it over with.
I once worked at a place where we were essentially unable to fix bugs, because it was a no-code tool and people would do all kinds of wacky things with the system.
The mandate was, as far as I could tell, to enshrine these former bugs as actual documented features. I wish I had seen this as a warning for how the place was run, but I also needed the money.
That was on top of people doing completely stupid things and abusing the systems we gave them for way too little money. It got to the point where the company was offering dedicated servers as an upsell because some clients were paying far too little, in my opinion, for the amount of resources they were taking up.
The breaking point came when I got woken up at 3am two nights in a row by a client, very obviously an Indian scam call center, who was paying us something like $100/mo for a button that calls an engineer at any time. The problem was they had something like 7 layers of recursion through their app, and each update caused cascading updates, which unsurprisingly taxed the entire server and made updates slow and/or unreliable.
Maybe I just sucked at it, but I told leadership we needed to jack up prices on that button and kick those pretty obvious scammers off the platform. I left a few months later.
It made my blood boil when the receipt had 3 different fees before we even got to the tip.