Agreed, but you can make it manageable if you do it a lot. First give your CloudFormation (CF) runner full permissions. Then, once you have your CF template done and working, take away all permissions from your CF runner. Then add them back one by one as it fails.
You can even build a script to run it, check the errors, update IAM, and then run it again.
Eventually you'll have a set of IAM least privilege rules. It's a pain but it's secure.
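If you go the script route, here's a rough sketch of that loop with boto3. It assumes the stack already exists and the runner role uses a single inline policy, and it glosses over rollback timing; the stack/role/policy names are placeholders:

    import json, re, time
    import boto3

    cfn = boto3.client("cloudformation")
    iam = boto3.client("iam")
    STACK, ROLE, POLICY = "my-stack", "cf-runner", "least-privilege"  # placeholders
    DENIED = re.compile(r"not authorized to perform: (\S+)")

    def denied_actions():
        # Try an update; on failure, scrape denied IAM actions out of the stack events.
        try:
            cfn.update_stack(StackName=STACK,
                             TemplateBody=open("template.yaml").read(),
                             Capabilities=["CAPABILITY_NAMED_IAM"])
            cfn.get_waiter("stack_update_complete").wait(StackName=STACK)
            return set()
        except Exception:
            events = cfn.describe_stack_events(StackName=STACK)["StackEvents"]
            return {m.group(1) for e in events
                    if (m := DENIED.search(e.get("ResourceStatusReason") or ""))}

    actions = ["cloudformation:*"]  # seed; everything else gets added on demand
    while True:
        doc = {"Version": "2012-10-17", "Statement": [
            {"Effect": "Allow", "Action": sorted(set(actions)), "Resource": "*"}]}
        iam.put_role_policy(RoleName=ROLE, PolicyName=POLICY,
                            PolicyDocument=json.dumps(doc))
        time.sleep(15)  # give IAM changes a moment to propagate
        new = denied_actions()
        if not new:
            break  # clean run: `actions` is your least-privilege starting point
        actions.extend(new)

You'd still want to tighten the Resource fields by hand afterwards, but it gets you the action list for free.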
I honestly cannot imagine how sophisticated cloud software will be in ten years. So much will be hashed out that it will truly be like building software with Legos. The idea of using libraries or abstractions over infrastructure will grow and eventually mature. I think software won't be automated, but the interfaces we implement for custom logic will get smaller and smaller.
It's no secret to homeowners that you get money by upzoning your parcel in California. They don't care. The whole reason somewhere like Rancho Palos Verdes, or wherever else in LA County, fights development and remains a suburban, SFH enclave is that the locals want it that way. They bought in specifically because it was this way. And they will elect officials who fight on their behalf to preserve it. They don't want to approve split lots on the Palos Verdes Peninsula; that would generate traffic on the peninsula that they don't currently have. They already limit nonresidents' access to public beaches as much as possible, over fears of traffic.
In my experience, it's because the tools that are given to the dev team are opaque, poorly documented, made for a general case that's rarely sufficient, and usually locked down with some role-based access control to the point where, even if I can figure out what's wrong with some pipeline, I rarely have the access to change anything. I have an AWS CodePipeline that I can see running, but the logs from each step are sent to a CloudWatch instance running in another AWS account, which only the devops folks can view. I have a Jenkins pipeline that was cookie-cutter from what the devops team offers. If I wanted to actually grok what's occurring during the pipeline, I'd have to jump between 3 or 4 GitHub repos, and also account for the versions/branches of each one.
It really feels worse than back in my WebSphere days, when in order to see app logs, we had to submit a ticket to the hosting team and wait 2 days for a .log file to be sent back through email. I'm consistently left wondering if we've actually gained anything.
That's how I felt at first, but getting deeper into the Swin Transformer paper, it actually makes a fair bit of sense: convolutions can be likened to self-attention ops that can only attend to local neighborhoods around pixels. That's a fairly sensible assumption for image data, but it also makes sense that more general attention would better capture complex spatial relationships if you can find a way to make it computationally feasible. Swin Transformers certainly go through some contortions to get there, and I bet we'll see cleaner hierarchical architectures in the future, but the results speak for themselves.
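To make that analogy concrete, here's a toy 1-D version of window-restricted self-attention (plain numpy, with q = k = v for brevity). This illustrates the inductive bias in question, not Swin's actual shifted-window scheme:

    import numpy as np

    def local_self_attention(x, window=7):
        # Each position attends only to a small neighborhood around itself,
        # which is roughly the locality assumption a convolution bakes in.
        n, d = x.shape
        out = np.zeros_like(x)
        half = window // 2
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            scores = x[lo:hi] @ x[i] / np.sqrt(d)
            w = np.exp(scores - scores.max())
            out[i] = (w / w.sum()) @ x[lo:hi]
        return out

Let the window grow to cover everything and you're back at full self-attention, with the quadratic cost that Swin's windowing exists to avoid.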
> Remarkable is advertised as an E-Ink reader as well as a writing device. In reality, this only works if you have e-pub books on your hard drive. Where can you get e-pub books? Basically nowhere
What? I'll give you that the two largest distributors, Amazon and Barnes & Noble, don't sell EPUB, but pretty much everyone else who sells ebooks does, including Google Play Books. Also, you can use Calibre to turn any other ebook format into an EPUB.
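The conversion is scriptable, too: Calibre bundles an ebook-convert CLI that picks formats from the file extensions. A minimal sketch (the filenames are placeholders, and the input needs to be DRM-free):

    import subprocess
    # ebook-convert ships with Calibre; the output format is inferred
    # from the extension of the second argument
    subprocess.run(["ebook-convert", "book.mobi", "book.epub"], check=True)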
First-gen transistor computers often used standard functional units - gates, flip-flops, and such - packaged into small modules with edge connectors and wired together with wire wrap on a backplane. The DEC PDP-8, for example, was built this way.
Later TTL/CMOS designs replaced the packaged modules with much smaller 74xx/40xx ICs.
You can make basic logic gates with just diodes and resistors, but you need transistors for inversion, buffering, and a usable flip-flop.
That's probably the minimum level for useful computing/calculating. If civilisation has ended and you have no transistors, you probably don't have the resources to make glass valves either, so that's going to be a problem.
> Yep. I've said it many times here and basically anywhere I can: Clamoring for remote-only work, maximum-introvert, never fly somewhere, don't value in-person communication/meetings/etc is a dangerous game to be playing.
What's interesting is that I flew in for a meeting with my company recently, the first since I was hired.
This took hours upon hours of my time. Time I could have spent working.
We split it between a conference room and restaurants, and much of the time was spent discussing things with the larger team that weren't relevant to me. So I spent that time on my laptop in the conference room, fixing bugs.
There was zero benefit to any of this outside of a solid handshake and getting to "meet people in person".
However, I have 3 meetings a week with the entire team on HD video, so I already knew exactly who I was meeting and what their mannerisms and personalities were.
That included walking through an office where people jumped out of their cubicles and offices when they saw me and said "hey! Great to finally see you in person!" without a second thought. They knew exactly who I was without ever having met me.
Is this really necessary? It was a huge waste of company money, and of time I could have spent working.
I should note that I was slightly mistaken: CA houses half of the “unsheltered” in the United States but only a quarter of the “homeless.” The unsheltered are highly visible and make up 70% of the total homeless population in CA, the highest rate in the nation.
I think if you consider the increase in assets (the total value of servers, racks, network gear, etc.) from purchasing hardware to accommodate that growth, in order to satisfy those revenues and future revenues, it makes sense. It is entirely intentional and desirable for any gross margin to go fully back into growth, or into assets to support growth, especially in the cloud game, where infrastructure costs and R&D are high and front-loaded. Once you purchase a hard drive and place it in a network-connected server, it costs a few pennies each month to keep it spinning while it generates revenue for 5 years (or more). You can see this by looking at reported Total Assets increasing from $38M in 2019 to $54M in 2020. That is after the depreciation mark-down of pre-2020 equipment, which is typically very high for IT equipment (upwards of 33% per year of the original value). The increase in annual revenue over time from previous cohorts is very attractive and shows that customers do appreciate the cost effectiveness and quality of service compared to other offerings.
If you consider that they bought and added, net of depreciation, about $16M of assets to their balance sheet in 2020, the loss of $6M doesn't seem terrible. Using the bean-counter approach - no new marketing, no spending on growth, no unnecessary R&D - they could have cleared about $20-25M of EBIT in 2020.
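As a rough back-of-envelope on that bean-counter figure (this is just my reading of the numbers above, not anything straight from the S-1):

    # all figures in $M, rounded, taken from the comment above
    net_loss_2020  = -6
    net_asset_adds = 54 - 38   # growth capex kept on the balance sheet, ~16
    marketing_2020 = 11.9      # mostly cuttable in a no-growth, milk-the-base mode
    print(net_loss_2020 + net_asset_adds + marketing_2020)  # ~22, i.e. in the $20-25M range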
Deciding to use your gross profit to fuel further growth (especially considering they're compounding it at a healthy rate, and doing so without giving up equity or paying interest) is a wise move. If anything, it probably would have made sense to take on some debt or finance more of the hardware purchases early on (as I see they are now doing at some scale as of 2020) in order to put those proceeds towards customer growth.
The only thing in these numbers that is a marginal concern is that it's difficult to see the split between marketing spend on B2 customers and backup customers (they spent $8M in 2019 on sales and marketing to generate 42k new customers, and $11.9M in 2020 to add 40k). It is very possible, and likely, that those marketing dollars went towards new enterprise customers (B2 storage vs. backup), and the revenue-per-customer number increasing during that timeframe seems to support that. Are blog posts counted in the sales/marketing spend? It seems lots of the customers came from the organic content play.
Very cool to see the company whose free blog posts about which hard drives are most reliable (when some had >5% annual failure rates and others <1% for the same price) nearly 10 years ago make it to an S-1.
As someone who does manufacturing automation for a living, I don't think this person has any experience in a manufacturing environment.
Coding is really not a limiting factor in manufacturing automation. For decades everything has been built to be controlled first by relay logic and then by PLCs, which implement multiple languages, like ladder logic, that are basically already drag-and-drop. Most sensors are simple: generally they provide a boolean on/off signal, and occasionally there's an analogue output, which is pretty easy to interpret. Physically placing and wiring the sensors correctly is the hard bit, not interpreting the data. There are more advanced systems, like computer vision, but they all already have user interfaces which allow you to interact with them without coding. In fact, it's actually a real pain in the ass that in most cases there is no coding option, and thus everything has its own proprietary and arcane method of operation, and it's unreasonably difficult to get things to talk to each other.
Further, coding skill is generally not particularly lacking. Assembly-like languages such as G-code are widely used by machines, many tradesmen know enough to hand-edit at least simple programs, and the engineers on staff are generally comfortable with more advanced programming. They're a far cry from software developers, but it's sufficient for the relatively simple cases that a no-code solution might be suitable for.
The main issue for automation in manufacturing is not versatility but reliability. A few hours of downtime can cost tens of thousands of dollars, and a machine crash might easily cost 8 figures, to say nothing of the potential for injury or even death. Whatever difficulty there is in coding for manufacturing environments is in structuring these programs such that they are consistently accurate and fail-safe. Personally, I don't find anything that makes it easier for someone to tell a machine to do something stupid particularly appealing.
Node is hated because programmers are highly opinionated (not in a negative way). Everything with huge usage gets hate, because everything has flaws and trade-offs. People get forced to use it at work, sometimes to solve the wrong kinds of problems for its strengths, which amplifies the dislike. It's the nature of the beast.
I feel this misses a larger point: a large chunk of valuable OSS work isn't about innovation but about maintenance, i.e., keeping the same things working in an evolving ecosystem.
As long as we keep framing all software development as "innovation", there will never be enough money for the infrastructure underpinning the real innovative work. That framing both makes innovation appear much more expensive than it really is and makes software development look maintenance-free.
> I know one hard science PhD who runs their own K8s cluster at home and plays with Linux distros.
That's super awesome for that data scientist, but the question for a business is: can/should you structure yourself in such a way that you NEED employees with that corner-case level of joint expertise?
The answer is you really can't. Individuals have awesome strengths that they developed for reasons particular to them. Use those strengths when you can. But the business has to rely on a common denominator for a role, or else it'll never fill it when its unicorn leaves to go backpacking in Europe.
I'm honestly surprised this is on HN, but it's good that it is.
I worked with Nic on and off for almost his entire tenure while I was CTO for Kessel Run, and I can state with full confidence that this is, at best, him misrepresenting his importance and the problems with DoD IT, and at worst, his attempt to spin his being fired (or asked to resign, à la Nixon) by the incoming Secretary (the timing here is not just a coincidence).
A couple of core points that are important to keep in mind, none of which have anything to do with Nic's character, integrity, communication style, or technical capabilities (a separate and important topic, but not suitable for this public forum IMO):
- The CSO position was made up by him; it's not related to any GSA Schedule, and it had about the kind of charter you would expect for such a position: namely, ill-defined and loosely empowered.
- There was no office of the CSO, in the sense that it was not congressionally funded and had no budget, no personnel, and no real authority to write or implement policy or to actually do engineering.
- Nic never held a clearance, and as a result was never actually involved in or aware of most of the programs he intended to impact.
- His primary mission seemed to be to push any organization that was developing software for the USAF to immediately adopt microservices architectures, containers/kubernetes and a couple of very specific "DevSecOps" practices - and specifically to the specifications that he approved/suggested. Make of that what you will.
That said, a lot of what he says is true: IT/network infrastructure, development, test, etc. in the DoD is far from modern and in some places completely broken. In other places, where it matters a lot, it's like nothing you've ever seen or will likely see in the commercial sector for decades.
Bottom line, I suggest taking this tirade with an EXTREME amount of salt.
The trouble is whom and how to ask. The submarine world is a giant, international cat and mouse game, and asking around gives away operational details that may involve compromising other things (e.g., special operations deployments) if you say "at exactly X coordinates at time T we bumped into something. Anybody know?"
Also, since the dawn of militaries, much of what occurs is posturing. You want the other side to think you are big, bad, competent professionals. "We ran into something and we don't know what" makes you less superhuman in the eyes of the public and potential adversaries.
The fight isn't the issue I referenced, not fully. A statute cannot change the constitution and alter voting requirements of the legislature. That is what this attempted. As you mentioned, a different threshold is required in order to pass an amendment or a statute.
The author wanted the benefits of an amendment but to pass it by meeting the requirements for a statute.
Wikileaks' disinfo has almost always come in the form of deeply misleading, and sometimes explicitly false, editorializing and framing of leaks, rather than in the content of the leaks themselves.
They also post a lot of misinformation to their Twitter account - falsely claiming Bob Beckel was a Clinton staffer, boosting the false Seth Rich story, boosting bogus Clinton health claims, and much more besides.
They're very biased, and their biases come out in what they choose to publish, what they refuse to publish, how they choose to editorialize, and, in one known case, in selective editing of a leak. In general, the leaks themselves aren't "false information" but documents they spread to create a specific narrative. In several cases, like the Syria, Podesta, DCCC, and DNC leaks, it's clear they're serving as a front for Russian intelligence agencies, spreading a specific narrative to aid Russian goals rather than to further the truth. And since they serve as a front for foreign intelligence agencies, it's entirely possible they're getting drops from other intelligence agencies with similar agendas.
You actually don't even need the HTML scaffolding for that, and can author a js-sequence-diagrams diagram straight into a text file, append a simple script to render the document, and save as .html! Example: https://unpkg.com/browse/js-sequence-diagrams-autorenderer@1... - click on "view raw" to see it in action.
Looks a lot cleaner, and the .html itself is still a valid diagram source, since the script tag that bootstraps the renderer is prefixed with a comment hash.
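For instance, the entire file can be as small as this (the first two lines are js-sequence-diagrams syntax; the src should be the actual autorenderer script from the unpkg package above, elided here):

    Alice->Bob: Authentication request
    Bob-->Alice: Authentication response
    #<script src="...js-sequence-diagrams-autorenderer..."></script>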
EDIT: I used to have this in a gist that I'd load via rawgit.com, but since that's no longer active, I figured I'd update my script and make it publicly available through unpkg :)
This is an extremely low volume prototype run. You can get those scheduled on short notice. Fabs love them because they can do process optimization using them, without impacting production customers. They're ridiculously expensive per-die and you commit to accept a much higher failure rate than normal.
ST can make microcontrollers and is making them. It's just that they've sold their production a year ahead, before it's even been manufactured. Car companies fucked everyone over by flipping a large volume of orders back and forth, causing a bullwhip effect across the whole industry, with lots of knock-on effects in other industries, which were suddenly told (occasionally too late) that they needed to plan their inventory a year ahead because they can't get anything on short notice anymore. Car companies' vehicle production volume is in the tens of millions, but each vehicle has thousands to tens of thousands of ICs. The six months you mention are not the capacity period; they are the lead times involved.
Apparently he used Sentinel-Hub, which is a service I'd never heard of. I just signed up to try their service but apparently the free plan advertised on their pricing page is a scam. There's no such thing. It seems you can't use the service without paying.
To expand on just how absurdly narrowly courts interpret "clearly established", there's this example from the Wikipedia page on qualified immunity[1]:
> Critics have cited examples such as a November 2019 ruling by the United States Court of Appeals for the Sixth Circuit, which found that an earlier court case ruling it unconstitutional for police to sic dogs on suspects who have surrendered by lying on the ground did not apply under the "clearly established" rule to a case in which Tennessee police allowed their police dog to bite a surrendered suspect because the suspect had surrendered not by lying down but by sitting on the ground and raising his hands.
The net effect of this appears to be that it's impossible to prove literally anything pierces qualified immunity unless it was ruled on prior to the establishment of qualified immunity as a defense. How exactly can you create precedent when you require nearly identical precedent to set it? It's a blatant catch-22. Like, literally something that could have been in the book Catch-22.
There's also some interest in making eBPF the standard for computational storage [1]: offloading processing to the SSD controller itself. Many vendors have found that it's easy to add a few extra ARM cores or some fixed-function accelerators to a high-end enterprise SSD controller, but the software ecosystem is lacking and fragmented.
This work may be a very complementary approach. Using eBPF to move some processing into the kernel should make it easier to later move it off the CPU entirely and into the storage device itself.
> No, open source means the same thing it's always meant since the term was first coined. See the Open Source Initiative's Open Source Definition: https://opensource.org/osd.
Problem: the OSI did not coin the term 'open source'. OSI partisans claim that Christine Peterson coined the term at a strategy meeting in Palo Alto on 3 February 1998. However, the term and the concept were well known prior to that. Martin Tournoij does a decent enough job of collecting prior citations [1] that go all the way back to 1990. All the OSI did was take an existing philosophy, scribble some new restrictions on it in crayon, and call it Open Source(tm)(c)(pat. pending).
Honestly, though, I do love it when this comes up. It gives me the opportunity to irk the new guys by telling them that Lyle Ball, head of public relations at Caldera, has an earlier citation than the OSI, in the form of a press release announcing Caldera OpenDOS[2][3]. :D
We likely won't, at least nowhere near the 20th-century balance. For all the new tree saplings planted elsewhere, Brazil straight-up burns down old growth in the rainforest. For all the wind and solar installed by the rich countries, poor countries are happy to take the cheaper coal. Replacing nuclear baseload power with fossil fuels doesn't help either.
Even if we ever do manage to come together and fund active carbon capture, it will take decades to reverse the trend, and then possibly centuries for the oceans to de-acidify. Together with active efforts to burn down nature in the places where most biodiversity remains, the mass extinction event is likely to run its course before we get to a stable ecosystem. Sure, we can restore most of the biomass, but with far fewer species. And letting nature run its course will, naturally, take millions of years.
Wasm itself is at the level of assembly, not the level of JavaScript. If you want to compile a GC'd language to wasm, you have to bring your own garbage collector and compile it to wasm as well. That's not a showstopper from wasm's perspective, and several languages have done just that, but porting a GC to a new compilation target can be a bit of work.