friendzis's comments

> During my use of AI (yeah, I don't hate AI), I found that the current generative (I call them pattern reconstruction) systems has this great ability to Impress An Idiot

I would be genuinely and positively surprised if that ever stops being the case. This behavior is by design.

As you put it yourself, these LLM systems are very good at pattern recognition and reconstruction. They have ingested the vast majority of the internet to build patterns on. On the internet, the absolute vast majority of content is pushed out by novices and amateurs: "Hey, look, I have just read a single wikipedia page or attended a single lesson, I am not completely dumbfounded by it, so now I will explain it to you".

LLMs have to be peak Dunning-Krugers - by design.


> > It gets massive amounts of products and services enabling the US residents live well beyond their means.

> What does this mean really? That is their means.

The argument presented here is that economic growth (more specifically, trade volume increase) outside the USofA forces USD acquisition transactions with the USofA. This means there is a constant surplus of goods flowing into the USofA without an accompanying surplus of circulating money supply, leading to artificial deflation.

In other words, the cumulative productivity of the USofA, measured in USD, is lower than the cumulative outside-USofA fair-market value of goods transacted in the USofA. This effect increases gross value on the supply side without balancing out gross value on the demand side, allowing domestic players larger transaction volumes than their total productivity, with the deficit covered by the central bank.


Your comment falls into the engineering-superiority trap. Yes, one needs to understand how a camera (or many other instruments) works, but only because the different tweakable parameters are not completely orthogonal.

> The camera is the artistic instrument of cinema and using it well requires understanding how to leverage lens selection, aperture, shutter speed, exposure, focal plane, lighting, framing, etc to achieve the desired artistic outcome.

This is the key sentence. If you had a digital camera that perfectly mimicked the output of a film camera, you could take a "filmmaker" from the 70s, give them the digital camera, and they would successfully create a film of equal quality. Yes, one needs to understand that e.g. focal length changes depth of focus, but it's all about controlling the output. One does not really need to understand the inner workings of a system; all they need to understand is which parameters affect the output in what way.

>> there is probably not much of a predictable relationship between knowledge of how the camera works and quality of the resulting film.

Yes, usually some technical understanding is required to understand these relationships and use them well. However, even a perfect understanding of a tool's inner workings does not translate to being a good craftsman. There is some overlap, but that's it. One still needs to understand the filmmaking part well to make a good film. Hence the observation that technical knowledge does not translate to film quality: it's a necessary, but not sufficient, criterion.


> all they need to understand is which parameters affect output in what way.

Yes. That was the point I was trying to make and after reading the responses, I see I didn't make it as clearly as I could have. I think the confusion is in the multiple ways to interpret this phrase in the original article, "knowledge of how the camera works." I thought the examples I cited (lighting, lens, aperture, etc) would make clear how I took that phrase but they didn't do so sufficiently.

I'll give an example. In the example I'll substitute violin for camera again, because I think it helps to remove some of the technical nuance specific to cameras. I took "knowledge of how the violin works" to include "knowledge of how to..." apply finger pressure on the strings, rock the strings to create vibrato, use bow strikes, apply rosin - all to achieve the desired sound. I did not take "knowledge of how the violin works" to mean things like the effect of internal geometry on acoustic resonance. To me, those are "knowledge of how to build the violin", which I already said wasn't necessary to be a great violinist (although the example of 'geometry -> acoustic resonance' is more 'designing a violin', I group design as part of 'building').

I now see several people took the phrase differently, and that interpretation wasn't the point I was trying to make. As a filmmaker, "knowledge of how the camera works" means knowing how to apply lighting, lens, aperture, shutter speed, etc. to achieve the desired artistic result. I didn't mean "knowledge of how the camera works" in the sense of "the impact of pre-charge voltages on charge-coupled devices", which to me is akin to "the effect of internal geometry on acoustic resonance" in a violin. As I said, that's designing a camera/violin, which is part of "knowing how to build it", not "knowing how it works". "How it works" is simply too open to interpretation.


> If you had a digital camera that perfectly mimicked output of film camera

You must first prove it's at all possible.

And of course they could not do any of the special effects obtained by compositing images on the same film, and they would have no idea how to do that with a computer.

It's not the same at all.


> If you had a digital camera that perfectly mimicked output of film camera, you could take a "filmmaker" from 70s, give them the digital camera and they will successfully create a film of equal quality. Yes, one needs to understand that e.g. focal length changes depth of focus, but it's all about controlling the output. One does not really need to understand the inner workings of a system, all they need to understand is which parameters affect output in what way.

This paragraph tells me you don't understand how cameras work, because what you said is not true. The study of photography, and from it cinematography, is FUNDAMENTALLY about understanding the relationship between light, color, and the camera. This dates back to the beginnings of photography, and is the /primary/ topic written about by many of the world's best photographers (and famously a key area of focus for Ansel Adams). Photography, and from it cinematography, is almost entirely about lighting and exposure, and doing it at a professional level requires a deep technical understanding of the inner workings of the camera. A cinematographer working on feature-length films is not an amateur recording video clips: every element of the frame affects the mood and context of the story being told. Controlling for light and exposure is essential, and it goes beyond merely adjusting settings on the camera until it looks good in the viewfinder; it requires adjusting the actual environment you are recording (e.g. artificial lighting, light control, shadowing) in conjunction with the technical details of the camera system, the lens choice, and things like adjusting the aperture.

To have no understanding of the basic principles of light and its relationship to the camera, which is the core principle of a camera's operation, makes it impossible to produce professional quality work.


Addresses and names are nice, well-known examples of cross-domain data. It's not that attempts at normalizing these structured datums create problems per se, but rather that there is no single true normalization, so wrong normalizations end up causing problems.

Yeah, normalizing inherently introduces constraints on the data. For example, normalizing to first and last name implies that no one has a middle name, and that everyone has a first and a last name in the first place.

Also, first and last names depend on the culture. Oh, and people can have more than one name (as in distinct names, rather than multi-part names; some cultures use different names in different social circles).

Easier to just let them put their preferred name into a freeform text field.
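A minimal sketch of that approach (hypothetical schema, using SQLite purely for illustration): a single freeform column round-trips names that a first/last split would mangle.

```python
import sqlite3

# Hypothetical minimal schema: one freeform name column avoids baking
# "first + last" assumptions into the data model.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, display_name TEXT NOT NULL)"
)

# Names a first/last split would mangle:
names = [
    "Björk",                   # mononym: no "last name" exists
    "Gabriel García Márquez",  # two surnames
    "毛泽东",                   # family name comes first
]
conn.executemany(
    "INSERT INTO person (display_name) VALUES (?)", [(n,) for n in names]
)

# The value round-trips exactly as the user entered it.
stored = [row[0] for row in
          conn.execute("SELECT display_name FROM person ORDER BY id")]
assert stored == names
```

The cost, of course, is that you give up querying by surname; whether that trade-off is acceptable depends on the domain.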


Very importantly, the value can be adapted to its purpose. A person's name can be very different if it is to be used for credits in a publication, for a formal wedding invitation and guest list or for shipping to a specific address.

> Joins, lookups, indexes.

You want to control these values within your database engine, at least so that they are actually unique within the domain; and there is no real reason for them to be user-controlled anyway, as they are used referentially.

> Idempotency.

User-supplied tokens for idempotency are mostly useful within the broader context of the application sitting on top of the database. Otherwise they become subject to the same requirements as internally generated ones, but without the same control, which is a recipe for disaster.
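A toy sketch (a hypothetical in-memory store, not any particular library's API) of why user-supplied idempotency keys need server-side scoping: without it, two clients that independently pick the same key would silently collide.

```python
class IdempotentStore:
    """Deduplicate requests by an idempotency key, scoped per client."""

    def __init__(self):
        self._seen = {}  # (client_id, key) -> cached result

    def execute(self, client_id, key, operation):
        scoped = (client_id, key)
        if scoped in self._seen:
            # Replay the cached result; do not re-run the operation.
            return self._seen[scoped]
        result = operation()
        self._seen[scoped] = result
        return result


calls = []

def create_order():
    calls.append(1)  # side effect we want to happen exactly once per intent
    return "created"

store = IdempotentStore()
assert store.execute("alice", "retry-1", create_order) == "created"
assert store.execute("alice", "retry-1", create_order) == "created"  # replayed
assert store.execute("bob", "retry-1", create_order) == "created"    # new client, runs again
assert len(calls) == 2
```

A real server would also need expiry and persistence for the key table, which is exactly the "same requirements as internally generated ones" the comment alludes to.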

> Sharing

Those are the same idempotency tokens from the previous point, with you as the supplier. In some cases you want to share them across prod/stage/dev environments, in some cases you may want to explicitly avoid duplicates, and in some cases you don't care.

All these use cases are solved with mapping tables / classifiers.

Example: in an asset management system you need to expose an identifier for a user-registered computer that is built from components procured from different suppliers, each with their own identification schemas, e.g. exposing a keyboard/mouse combo as one component with two subcomponents or as two (possibly linked) components.

This requires you to use all those listed identifier types in different parts of the system. You can bake them into the database/schema design, or employ some normalization and use a "native" identifier plus mapping tables.
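A sketch of the mapping-table variant (hypothetical schema and scheme names, SQLite for illustration): each asset keeps one internal "native" id, and every supplier-specific identifier lives in a side table keyed by (scheme, value).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE asset (
    id   INTEGER PRIMARY KEY,   -- the internal "native" identifier
    name TEXT
);
CREATE TABLE asset_external_id (
    asset_id INTEGER REFERENCES asset(id),
    scheme   TEXT,              -- e.g. 'supplier_sku', 'serial', 'inventory_tag'
    value    TEXT,
    UNIQUE (scheme, value)      -- each external id is unique within its scheme
);
""")

conn.execute("INSERT INTO asset (id, name) VALUES (1, 'keyboard/mouse combo')")
conn.executemany(
    "INSERT INTO asset_external_id VALUES (?, ?, ?)",
    [(1, "supplier_sku", "KB-200"),
     (1, "serial", "SN-9f3"),
     (1, "inventory_tag", "IT-0042")],
)

# Any identification schema resolves to the same internal asset.
(asset_id,) = conn.execute(
    "SELECT asset_id FROM asset_external_id WHERE scheme = ? AND value = ?",
    ("serial", "SN-9f3"),
).fetchone()
assert asset_id == 1
```

Adding a new supplier's schema is then just new rows in the mapping table, with no change to the asset schema itself.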


Correction: Import duties are still generally (depends on product category, some are taxed from first Euro) waived on small parcels.

What the parent refers to is the small-parcel exemption to VAT; however, every larger retailer has adjusted to that and is now declaring VAT on their end.

Personally, I would say this is a net win.


> that they have to release in open source the software, blueprints or tools that are needed to be able to support your own device yourself.

You can safely bet that there is no end product [that you would want this regulation to apply to] with 100% in-house engineering, unencumbered by licenses. This would either be unenforceable or eliminate all but the largest players from the EU market.

If you look from a distance, most of these regulations simply mandate managing the product lifecycle. Yes, you can enter the market quicker and cheaper if you don't think about eventual recycling, or if you bodge together something that barely works. We take warranties for granted now, but warranties are part of this family of regulations: if you introduce a product to the market, introduce something that is actually functional.


> suddenly it's all the same music you've already been listening to, very little new music.

However, if you expose the gods of the algorithm to a new artist, suddenly all the auto-generated feeds will try to include that band regardless of fit. Weird how these "social graph" systems tend to form and perpetuate bubbles.

On top of that, there are some weird shenanigans with metadata. Listening to "foreign" bands can very easily taint the weekly mix with songs in a language you don't even understand and probably don't care about. Anecdata, of course: I just looked at my "daily mix x", which appears to be in my local language, but with styles all over the place. Another mix contains, mostly correctly, turn-of-the-century romantic pop.

I suspect the algorithm biases heavily on metadata so that it could be easily fed "albums/artists that publisher x paid to promote".


> However, if you expose the gods of the algorithm to a new artist, suddenly all the auto-generated feeds will try to include that band regardless of fit.

cf. YouTube: watch one video on X that's outside of your normal viewing, and RIP your homepage for the next few days until you've clicked "do not recommend" on enough videos to stop the flood of X and X-adjacent content.


In practice, US tech companies literally buy their way out. They pay such a premium for those independent contractors that there would be no such complaints in the first place.


No complaints based on the amount of pay, maybe.

But for example, someone who is fired or laid off in a way that wouldn’t comply with local employment protections if the employment relationship were correctly classified might assert their misclassification claim so that they can also get compensation for their wrongful termination.

If that happens, then the company not only has to scramble to catch up on the overdue social contributions for the complaining employee and pay any applicable penalties, but also likely have to undergo an audit of their other workers in that country plus the same consequences for them.

There’s a reason why any US tech company that’s big enough to be a juicy financial target tends to do this correctly, and why companies like Deel, Remote.com, and their less tech-branded competitors (such as Velocity Global) are gaining popularity among people who want to do this correctly at smaller scales than those for which it makes sense to set up foreign subsidiaries.

When smaller companies take this particular shortcut, they are risking severe financial consequences for the company if the authorities discover it, and in many cases this also comes with personal liability for some of the executives who are neglecting their legal duties.


I concur. It might not be feasible in terms of the computational power available, but I don't think there is anything fundamentally stopping the application of those training mechanisms, unless the whole neural-net paradigm is fundamentally incompatible with those learning methods.

