MatthiasPortzel's comments (Hacker News)

Yep. I heard someone at Microsoft venting about management constantly pleading with them to use AI so that they could tell investors their employees love AI, while senior (7+ year) team members were being “randomly” fired.

> Does rollback introduce any performance overhead that would skew my results?

I would expect it to be the other way around—since the transactions are rolled back and not committed, they would have significantly less performance impact. But I’m working from an academic model of the database.
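A minimal sketch of the rollback pattern using Python's sqlite3 as a stand-in for whatever database the benchmark actually targets (table name and workload are made up for illustration):

```python
import sqlite3

# In-memory database as a stand-in for the real benchmark target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO t (v) VALUES ('baseline')")
conn.commit()

# Run the workload; the sqlite3 module opens an implicit transaction on
# the first INSERT, and rollback() discards it, so every benchmark run
# starts from the same state and nothing is durably written.
conn.executemany("INSERT INTO t (v) VALUES (?)", [("x",)] * 1000)
conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1: the rolled-back inserts were never committed
```

Since the rollback discards the writes, the commit/fsync cost never shows up in the measurement, which is the intuition above.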


> The remedy order should also prevent Google from entering into exclusive agreements to access AI training data…

Google, for example, bought exclusive access to Reddit's data. No one else can train on Reddit unless you have more money than Google (you don't). So one of the asks is that that sort of exclusive deal be prevented. If everyone is allowed to buy Reddit's data, and Google makes the best model, that wouldn't be a problem.


> So one of the asks is that that sort of exclusive deal be prevented.

Thank you, that actually sounds reasonable.


This requirement is the result of EU regulation.


For reference, the regulation you are probably referring to is Article 30[1] and Article 31[2] of REGULATION (EU) 2022/2065 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).

Article 30 requires capturing, and performing vaguely defined validation of, the following information supplied by a trader (which includes traders of software):

- the name, address, telephone number and email address of the trader;

- a copy of the identification document of the trader or any other electronic identification as defined by Article 3 of Regulation (EU) No 910/2014 of the European Parliament and of the Council;

- the payment account details of the trader;

- where the trader is registered in a trade register or similar public register, the trade register in which the trader is registered and its registration number or equivalent means of identification in that register;

- a self-certification by the trader committing to only offer products or services that comply with the applicable rules of Union law.

Article 31 requires at least the following trader information to be displayed to potential buyers:

- name;

- address;

- telephone number;

- email address;

- clear and unambiguous identification of the products or the services;

- information concerning the labelling and marking in compliance with rules of applicable Union law on product safety and product compliance.

[1] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...

[2] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...


It's Google's decision to enforce it worldwide. I'm not in Europe, and most of my apps' users were not in Europe.


It's really hard to know that for sure. Why risk antitrust lawsuits or European fines because you tried to do the bare minimum?


Are you sure Europe won't sue you for Europeans using it with a VPN, or for Europeans outside Europe using it? Because I am not sure they wouldn't sue.




Do you think I somehow personally chose where my apps would be more popular or less popular? If they wanted to cut off my apps in only European regions due to European regs it would be disappointing but understandable.


> disappointing but understandable.

It's amazing to me that there are some people that will go to these lengths to defend the profits of one of the largest corporations in the world.

At no point does it even occur to you that Google are already bending you over a table with their cut, and you're already white knighting for them even in a completely hypothetical situation.

Do you have very strong investments on Google? Otherwise, I really can't explain why an entrepreneur would ever think the way you do.


I agree with you, but in my particular case there was nothing for Google to take a cut of. I never sold anything or even took donations.


Oh, my bad

For some reason I thought you sold an app or a service.

Serves me right for assuming, my apologies.


> someone who doesn’t know or care how a system works shouldn’t be prescribing what to do to make it secure

The part that’s not said out loud is that a lot of “computer security” people aren’t concerned with understanding the system. If they were, they’d be engineers. They’re trying to secure it without understanding it.


Good computer security people are engineers.


Yes, this is only a replacement for the Homebrew CLI. It doesn’t have its own package repository and moreover it doesn’t have the ability to build packages (yet)—it’s just downloading and installing the binaries built by Homebrew.


> I want Weird to be my digital ‘soul pod’. It’s the digitized me; the source from which all of my other digital being springs forth.

I recommend working on your elevator pitch, because while strangely eloquent, that is utterly incomprehensible.


The quirky sensibility coming through in the copy might appeal to the demographic interested in maintaining personal digital gardens rather than a slick SaaS product. It does to me!


Not disagreeing with the thrust of your post, though I would take issue with it being both "strangely eloquent" and "utterly incomprehensible". It's just awkward writing... which, for a platform called Weird, is perhaps the point.


It's crazy that this post seems to have stumbled across an equivalent of the Copy-and-Patch technique[0] used to create a Lua interpreter faster than LuaJIT[1].

[0]: https://sillycross.github.io/2023/05/12/2023-05-12/

[1]: https://sillycross.github.io/2022/11/22/2022-11-22/

The major difference is that LuaJIT Remake's Copy-and-Patch requires "manually" copying blocks of assembly code and patching values, while this post relies on the Go compiler's closures to create copies of the functions with runtime-known values.

I think there's fascinating progress being made in this area—I think in the future this technique (in some form) will be the go-to way to create new interpreted languages, and AST interpreters, switch-based bytecode VMs, and JIT compilation will be things of the past.
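The closure trick itself is easy to demonstrate outside Go. A minimal sketch in Python (illustrative only; the node format and names here are made up, not the post's actual code): each AST node is compiled once into a closure that captures its children's already-compiled closures, so tree walking and opcode dispatch are paid at compile time instead of on every evaluation.

```python
def compile_expr(node):
    """Compile an AST node (nested tuples, a toy format assumed here)
    into a closure that takes an environment dict and returns a value."""
    op = node[0]
    if op == "lit":
        v = node[1]
        return lambda env: v
    if op == "var":
        name = node[1]
        return lambda env: env[name]
    if op == "add":
        lhs, rhs = compile_expr(node[1]), compile_expr(node[2])
        # lhs/rhs are resolved once, here; each evaluation is just
        # two calls and an add, with no per-eval dispatch on `op`.
        return lambda env: lhs(env) + rhs(env)
    if op == "mul":
        lhs, rhs = compile_expr(node[1]), compile_expr(node[2])
        return lambda env: lhs(env) * rhs(env)
    raise ValueError(f"unknown op: {op}")

# (x + 1) * 2, compiled once and run many times with different inputs.
prog = compile_expr(("mul", ("add", ("var", "x"), ("lit", 1)), ("lit", 2)))
print(prog({"x": 4}))  # 10
```

The Go version in the post does the same thing with typed closures; the "patching" of runtime-known values falls out of ordinary closure capture.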


It’s not really copy-and-patch. The whole point of copy-and-patch is that you inline the copied code into your compilation output, and it makes a fast baseline because the individual builtin functions are optimized (via the compiler output you’re copying from) and inlined (which is why you need to patch: to update which registers are being used). In this model you really only JIT control flow, then inline the implementation of each bytecode operation (in contrast to Sparkplug [https://v8.dev/blog/sparkplug], which just calls a builtin instead of copying and patching). It’s still a JIT, which is vastly different from an interpreter.

> JIT will be things of the past

Sorry, no. JIT is not going anywhere. They mention in the article that a JIT would perform better; it was just more effort than they wanted to put in (a good tradeoff!). JITs power the Java, Wasm, and JavaScript VMs, and they are certainly the way to get the fastest code, because you can give the CPU code where it can do a much better job predicting the next instruction. With interpreters you’re often limited by the indirect loads involved in looking up the next function to call, and generating code for the control flow outside of calling your “builtins” is precisely what Sparkplug does.

At the end of the day, like most of engineering, it’s about choosing the right tool for the job, which in this case is simplicity (often the right choice!), but that doesn’t mean it’s always the right choice. For example, if browsers did this, JavaScript performance would tank compared to what we get today.


The JVM has had a template interpreter since the mid-90s; it’s nothing new, and template interpreters are only performant enough to provide acceptable execution speed until you JIT.

Template interpreters are not a substitute for real JIT — JIT compilation isn’t going anywhere.


My understanding of most optimizing compilers is that this is an extremely common "last step" sort of optimization. A lot of the optimizing work is beating the code into a canonical form where these sorts of templates can be readily applied.

It's also my understanding that this is the point of "superoptimizers" [1], which look for these common patterns in something like LLVM IR to generate optimization targets for the mainline optimizer.

[1] https://en.wikipedia.org/wiki/Superoptimization


Faster than LuaJIT's interpreter, not the compiled code.


> It's crazy that this post seems to have stumbled across an equivalent of the Copy-and-Patch technique[0] used to create a Lua interpreter faster than LuaJIT[1]

> this post relies on the Go compiler's closures to create copies of the functions with runtime-known values

To be clear, the technique of using closures like this is ancient in the world of Lisp. You can see it in Paul Graham's books on Lisp from the 90s, in LiSP in Small Pieces, and in many interpreters of 80s/90s vintage. I would say it is quite standard.


> switch-based bytecode VMs

I am finding a switch-based bytecode interpreter to be very expedient on my computer. It seems that if the number of cases is kept small enough, your chances of getting a good branch prediction go up substantially. Something like a brainfuck interpreter runs extremely fast. In the worst case of randomly guessing, you are still going to time travel with a 12.5% success rate.
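For concreteness, a minimal switch-dispatched bytecode loop in Python (a toy stack machine with made-up opcodes, not brainfuck; the shape of the dispatch is the point):

```python
# Opcodes for a toy stack machine; a small opcode set like this keeps
# the dispatch branch comparatively easy for the CPU to predict.
PUSH, ADD, MUL, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]
        pc += 1
        if op == PUSH:        # push the literal that follows
            stack.append(code[pc])
            pc += 1
        elif op == ADD:       # pop two, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:       # pop two, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:      # return top of stack
            return stack.pop()

# (2 + 3) * 4
print(run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]))  # 20
```

The closure- and copy-and-patch approaches discussed above are, in effect, ways of removing this per-instruction dispatch loop.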


> In the worst case of randomly guessing, you are still going to time travel with a 12.5% success rate.

Random guessing clearly isn't the worst case, then; a bad prediction can miss every single time.


The claim is that these models are training on data that include the problems and explanations. The fact that the first model trained after the public release of the questions (and crowdsourced answers) performs best is not a counterexample; it is expected and supported by the claim.


I am a firm believer in the right to free speech and in the importance of expressing ideas that run contrary to the general cultural attitude.

That’s why I turn on the Show Dead setting on HN, and I love that HN has that feature.

Save your upvotes for people positively contributing.

