Telling someone they got fired for performance reasons when that's not actually true is outright evil.
(Of course we don't know for certain whether that's what happened here, but it sure does look like it)
Yes. This is already done. It's how almost all submarine communication cables currently work. Most long-distance fibre links do not use electronics to regenerate their signals.
They use optical amplifiers, which take light at one wavelength and use it to intensify light at another wavelength. They're much like lasers (technically I think they count as optically-pumped lasers?), and they turn on from a very small input signal, effectively reinforcing it.
This can happen across multiple signals, on different wavelengths, in parallel. Like a broadband radio amplifier, it boosts everything across a large working bandwidth. There are even optical compressors (also powered by light), which speed up the baud rate of signals. That way a slow electronic system can produce the original pulses, which are then compressed to rates faster than electronics can work with, multiplexed with many other signals at different wavelengths, sent down the line as one composite stream, amplified without decoding along the way, and finally the whole process is reversed at the other end.
This is the trick behind how fibre links are so fast, considering there are no electronics that can handle data serially at those speeds.
You're right about submarine fibers, but you seem to suggest that the pump light for the laser amplifiers is transmitted through the fiber from the cable landing point - like the technology discussed in the OP.
That is certainly not the case: the pump light is generated from electricity right where the laser amplifier sits in the fiber. No real amount of energy is sent optically down the fiber. To power the amplifiers, a high-voltage DC line is designed right into the submarine fiber cable. And those things carry a lot of power; a long fiber cable will draw tens of kilowatts of DC for all the optical repeaters.
The reason is, of course, that thousands of miles of cable have a pretty insane optical attenuation, no matter what you do, because optical attenuation rises exponentially with length. The electrical resistance of a high-voltage DC power line, on the other hand, only rises linearly.
You're right about submarine cables running DC along the shielding/armor to power the optical amplifiers. However, it's worth pointing out that there are so-called "repeater-less" systems that do use optical delivery of pump power to the amplifiers (typically they combine this with Raman amplification). Those systems can deliver high-capacity communication (not sure where the record stands, but 100s of Gb/s to low Tb/s) over >500 km without any electrical connection (you still need power at the receiver though).
These are typically used for short submarine connections, e.g. to connect an island, as they are much cheaper than running a full repeatered system.
Just to prove I never took physics, where are the photons actually going in a long distance undersea cable that makes it impossible to just flash a signal across an ocean sized length of fiber? (As I had assumed was the case.) Is it more a loss of clarity/resolution in terms of wavelength rather than the photons going astray?
> because optical attenuation rises exponentially with length
I believe the attenuation is stated in dB/km, therefore rises linearly (or even logarithmically if you look at it from the uncommon energy-wise point of view) with distance. Why should exponential be the case?
Imagine a length of fibre that transmits 90% of the light put in. Take the output, and pass it through another equal length. At the output of the second fibre length, we have 90% of 90%, i.e. 81%. Every extra length multiplies the surviving power by another 0.9, so the power falls off exponentially with distance, even though the loss expressed in dB adds up linearly.
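To put rough numbers on that compounding (0.2 dB/km is just a typical ballpark for modern single-mode fibre, not a spec), here is a minimal sketch:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double loss_db_per_km = 0.2;  // ballpark attenuation for modern single-mode fibre
        for (double km : {1.0, 10.0, 100.0, 1000.0, 6000.0}) {
            double loss_db = loss_db_per_km * km;              // total loss in dB: grows linearly with length
            double fraction = std::pow(10.0, -loss_db / 10.0); // surviving power: shrinks exponentially
            std::printf("%6.0f km: %7.1f dB loss, fraction of power left: %.3g\n",
                        km, loss_db, fraction);
        }
        return 0;
    }

At 6000 km that works out to 1200 dB, i.e. roughly 10^-120 of the launched power, which is why amplifiers are placed every several tens of kilometres.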
Efficiency of laser diodes goes down quite a bit with bandwidth. More importantly, you typically want the data going the other way (from the powered sensor). If you use the same fibre for both directions (which might be done for space constraints), the issue with using the same wavelength is that there are scattering processes in the fibre (some fundamental to how fibres work) that cause some light to be scattered back and essentially act like noise. Your sensor would transmit with only very little power, so the SNR might be completely destroyed by the backscatter of the high-power power-delivery light. If they are at different wavelengths, they can easily be separated.
I was thinking that the "power extraction" might attenuate the signal too much, and it would probably lower the power output since you need to modulate the light to transmit data, instead of having it on full brightness all the time. But maybe it would work for certain applications!
There's PSK and the like, which mess with phase, but I don't think that's the same as what you're asking, as you would typically then use something else on top to actually get separate streams. The problem with pure and plain phase-division multiplexing is: how do you separate it back out on the other side? You can make up arbitrarily many pairs of wave values that add up to the same resulting wave, so without doing something beyond plain multiple phases you can't tell them apart (see the sketch below).
Time-division multiplexing and frequency- or amplitude-division multiplexing crowd out most other approaches because they are so cheap and simple these days. Polarization is another knob to mess with when you need to squeeze out more.
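To make the "many decompositions" point concrete, here is a tiny numerical sketch (the carrier frequency and the phasors are arbitrary choices of mine): two different pairs of same-frequency, phase-shifted components that add up to exactly the same waveform, so a receiver that only sees the sum cannot tell which pair was sent without leaning on another dimension (time slots, separate frequencies, polarization, ...).

    #include <algorithm>
    #include <cmath>
    #include <complex>
    #include <cstdio>

    int main() {
        const double pi = 3.14159265358979323846;
        const double w = 2 * pi * 5.0;  // arbitrary carrier frequency

        // Pair A: two identical half-amplitude components at zero phase.
        const double a1 = 0.5, p1 = 0.0, a2 = 0.5, p2 = 0.0;
        // Pair B: components taken from the phasors (1 + 0.7i) and (-0.7i),
        // which also sum to 1, i.e. to the same overall wave.
        const std::complex<double> b1(1.0, 0.7), b2(0.0, -0.7);

        double max_dev = 0.0;
        for (int i = 0; i < 1000; ++i) {
            double t = i / 1000.0;
            double target = std::sin(w * t);
            double sum_a = a1 * std::sin(w * t + p1) + a2 * std::sin(w * t + p2);
            double sum_b = std::abs(b1) * std::sin(w * t + std::arg(b1))
                         + std::abs(b2) * std::sin(w * t + std::arg(b2));
            max_dev = std::max({max_dev, std::fabs(sum_a - target), std::fabs(sum_b - target)});
        }
        // Both decompositions reproduce sin(w*t) to within floating-point noise.
        std::printf("max deviation from sin(w*t): %g\n", max_dev);
        return 0;
    }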
That sounds like a very Luddite view. Why wouldn't artists be able to use AI selectively to automate "boring" tasks (such as filling the sky of an image with clouds) while still retaining overall artistic control?
Because that is not what's happening. My friends who work as illustrators for PC and mobile games say it's the exact opposite. AI is used for the bulk of the creative work - composition, posing, even the general art style. Illustrators are then tasked with "fixing" visual artefacts, stitching together generated images and giving the final polish. They describe it as being reduced from a creative writer to a grammar checker.
It's tempting to just say that creative work that can be automated this quickly should be automated so that artists can focus on more creative challenges, but this is not how it plays out in practice. Rather, this only allows companies to cut down costs. It is already extremely difficult to find work which will pay a livable wage as a creative. AI has already caused layoffs and negative wage pressure on remaining employees. The only thing that AI has done (at least among my circle of friends) is reduce corporate costs and increase antidepressant prescriptions.
When I watch a video like the demo-video for the Krita plugin we're discussing (https://www.youtube.com/watch?v=-QDPEcVmdLI), I do see a lot of creativity happening. The person is using stable diffusion as a tool to achieve the look, style and composition they want. The skill to be able to use such a model for creating art is definitely an acquired skill, and I would definitely consider it a form of art.
Of course there will be people just clicking "generate" on a website. But isn't that the difference between consumer and artist? Everyone can press the shutter button on a digital camera to take a snapshot. But the artist knows how to use light, angle and technology to create a photograph with the looks and composition that they intend. (If you compare snapshots from amateur photographers and from professionals, the differences are astounding. And it's not just about the cost of the equipment.)
Certainly, there will be jobs – especially the rather repetitive jobs – that will be replaced by the use of AI, just like stock photos replaced jobs of certain photographers, or just like industrialization and automation replaced the jobs of a lot of craftsmen and artisans. But craftsmen and artisans are still around, and they are paid a lot more than they used to be paid, as long as they provide added value on top of the generic products available on the market!
I would never argue that you CAN'T do something creative with it. The problem is not even this single tool itself, but the greater amalgamation of all AI tools that arise from the general societal phenomenon of using AI.
The problem with many technophiles is that they have a very distorted view of what they create. They often think it's going to do good because it's so cool but once that tech is out in the real world, it just mostly causes damage.
If you're interested, feel free to reach out to me because I am starting an anti-AI coalition.
Technology is just what it is. Good and bad are human categories that don't apply to technology per se (and are very subjective categories that change dramatically across time, space, and culture).
What humans use it for is another discussion.
One example:
- You can use nuclear fission to provide light and warmth to millions or blow up millions.
Is nuclear fission good or bad?
I would argue it depends what humans make of it.
Same with what you call "AI".
I wish you luck with your coalition, but once a technology is "out there", you can't take it back. I don't think there is an example in history where that happened; I'd be curious if you know of one.
In a certain light smartphones resemble the moral equivalent of violating the Prime Directive.
"Here, rural areas and undeveloped nations. Take this crippled, distorted window into the greater internet. It happens to be much better at viewing content than creating it and will surveil you more closely than ever you watch it. The preinstalled software is forbidden to remove. Don't view it more than ten minutes a day or the content recommended by social media algorithms may cause malaise. Like and subscribe for more content."
I think you'd be better served making moral arguments rooted in ethical principles that people adhere to in real life, not science fiction.
This is especially important when you consider how unethical the Prime Directive itself is as a principle, and how often Star Trek portrays violating it as the morally superior choice.
The position you're advancing here seems to infantilize people in rural areas and undeveloped nations, and aims to deny them the agency to make their own choices about how to fit modern technology into their lives and communities. It sounds like a modern variation on "noble savage" and "white man's burden" notions -- not exactly a good look.
> The position you're advancing here seems to infantilize people in rural areas and undeveloped nations
I believe it seems that way to you.
Many people (in particular unemancipated minors) might likewise consider it infantilizing to place a minimum age requirement on drivers' licenses, firearms, alcohol, etc. yet the consensus is that doing so is for the greater good.
> Many people (in particular unemancipated minors) might likewise consider it infantilizing to place a minimum age requirement on drivers' licenses, firearms, alcohol, etc.
It seems unremarkable that we tend to treat actual children like children, but it's far less mundane to propose treating mature adults like children on the presupposition that due to their cultural or ethnic origins, they must exist in an immutable childlike state. The latter is an extremely dangerous notion, and we ought to be wary of anyone who advances it.
> yet the consensus is that doing so is for the greater good.
I'm not sure that any 'greater good' calculus is part of any consensus whatsoever.
(1) AI was used here to clone a voice to make a mother think her daughter had been kidnapped.
(2) People such as illustrators getting fired from their jobs because AI can now do those things. Also, people NOT getting hired when they otherwise would have been.
(3) I am a professional writer, and I know of some websites that are using generative AI for articles and hiring fewer writers (or even firing them).
(4) AI removes what remaining reliance we have on each other and makes it less likely for people to talk to each other when needing some basic information. The societal effects of destroying communities where people need each other are pretty clear.
Ok, but that can be said of any technology. Chemistry is bad because someone used it to poison their friend. Phones are bad because they can be used for bomb threats, cars are bad because they put the whole horse industry out of work, and you can go on and on forever.
Every single technology can be abused but it doesn't mean that they mainly cause damage.
(1) You are right, and that is why we should be much more cautious with technology.
(2) AI is unique in the sense that it has a much wider range and acts much faster. Therefore, it is much more dangerous, similar to how both salt and sodium cyanide are dangerous but the latter is much worse. You need to think in terms of the magnitude of the effect, not just its qualitative nature.
That's actually a problem for the business model of mobile games. A consumer can - or very soon will be able to - pick up AI tools and cut out the middleman org churning out these illustrations, just like they cut out the professionals. It won't be too long before games are made that advertise "put your original characters in the game", and it won't be some complicated character creation tool - it'll be generative stuff.
There's a lot of "but wait, there's more" in what's happening around AI.
> I think AI use in art tools is inevitable, but replacing artists at any level is not a good thing.
Everything in the computing space has been shifting labor from one skillset to another skillset and maximizing the output per hour worked so that fewer workers are needed for the same output (but also more tasks are worth doing, because the costs are lower for any given benefit.) Why is displacing people manually building the visual component of video games any worse than, say, displacing typists, secretaries, people delivering interoffice mail -- all of whom also had salaries, dependents, and livelihoods -- while increasing the value of work in the field automating all those things?
I am a luddite and I agree with most luddite sentiments.
Most of this generative AI is NOT about using AI for boring tasks, and have you ever even tried to draw clouds? Not easy. Everyone draws clouds differently, which you would know if you ever tried to draw anything.
Moreover, AI as a societal phenomenon goes way beyond AI drawing clouds.
> which you would know if you ever tried to draw anything
I know exactly how hard it is to draw anything because I tried a bunch of times, and failed. I for one am happy that I can now express my creative ideas, which I couldn't do before due to missing talent / practice.
You're free to personally be happy that you can express your creative ideas, but it is a bit absurd to expect people who did put in the effort in practicing to not see you in a negative light as someone who wants the 'benefits' without putting in the hard work of self-improvement.
This is a uniquely AI-related issue, as artists of all mediums can relate to each other about their struggles learning and improving their skills and ability to express themselves.
That's trying to put words in my mouth. We were talking about creative expression being taken away by AI, and I argued that artists can still retain creative expression, and that these AI tools make it possible for more people to express themselves creatively.
I never said that artists have no reason to feel unhappy about that. That's criticising a position I didn't argue.
“I for one am happy that I can now express my creative ideas, which I couldn't do before due to missing talent / practice.”
The problem here is we need to look beyond our own self interest to how this will impact other people.
We don't make a career out of art. This technology is just a novelty to us, but many others rely on their art to support themselves and their families, and they had no way of foreseeing this technology coming. They need it more than we do.
> Most of this generative AI is NOT about using AI for boring tasks, and have you ever even tried to draw clouds? Not easy. Everyone draws clouds differently, which you would know if you ever tried to draw anything.
Perlin noise on a plane, can be either in line with the camera or off at an angle. Nice effect. Very easy. I don't even count myself as a proper artist.
Clouds can obviously be hard when you have a specific cloud formation in mind — but "just" a random cloud, to the standards of most who will observe it, is much easier.
And of course, there are plenty of free photographs of clouds, and Photoshop has had plenty of filters — even from the days before people had broadband, let alone what people now call AI — to turn those photographs into different styles.
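For what it's worth, here is roughly the sort of thing I have in mind: a sketch using smoothed value noise (a simpler cousin of true gradient Perlin noise), with every constant below made up by me, summed over a few octaves and biased toward white so the output PGM reads as a passable cloudy sky. Good enough for a background, nothing more.

    #include <cmath>
    #include <cstdio>

    // Cheap deterministic hash of a lattice point -> pseudo-random value in [0, 1).
    static double hash2(int x, int y) {
        unsigned h = static_cast<unsigned>(x) * 374761393u + static_cast<unsigned>(y) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h ^ (h >> 16)) / 4294967296.0;
    }

    // Smoothly interpolated value noise.
    static double value_noise(double x, double y) {
        int xi = static_cast<int>(std::floor(x));
        int yi = static_cast<int>(std::floor(y));
        double tx = x - xi, ty = y - yi;
        double u = tx * tx * (3 - 2 * tx);      // smoothstep fade
        double v = ty * ty * (3 - 2 * ty);
        double a = hash2(xi, yi),     b = hash2(xi + 1, yi);
        double c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
        double top = a + (b - a) * u;
        double bot = c + (d - c) * u;
        return top + (bot - top) * v;
    }

    int main() {
        const int W = 512, H = 256;
        std::printf("P2\n%d %d\n255\n", W, H);  // ASCII PGM header
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // Sum a few octaves of noise (fractal Brownian motion).
                double n = 0.0, amp = 0.5, freq = 4.0 / W;
                for (int o = 0; o < 5; ++o) {
                    n += amp * value_noise(x * freq, y * freq);
                    amp *= 0.5;
                    freq *= 2.0;
                }
                // Darker noise becomes "sky", brighter blobs become "cloud".
                double cloud = (n - 0.35) * 2.5;
                if (cloud < 0.0) cloud = 0.0;
                if (cloud > 1.0) cloud = 1.0;
                std::printf("%d ", static_cast<int>(cloud * 255));
            }
            std::printf("\n");
        }
        return 0;
    }

It obviously won't satisfy anyone who wants a particular cloud formation, which is the harder problem.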
> Perlin noise on a plane, can be either in line with the camera or off at an angle.
This looks like trash and doesn't look like clouds. Even if you're doing procedural clouds, everyone does them differently. And a lot better than just slapping Perlin noise on a plane. Photoshop filters cannot change the bones of a cloud, and when people are illustrating clouds they're taking entirely different approaches. They're not just "this cloud, but flat" or "this cloud, but with a fuzzy diffused look." All you're doing is showcasing your own lack of knowledge on the subject while filling the arrogant techbro stereotype perfectly.
I suppose the vendor could sell a home server device, which runs some kind of Tailscale-like technology to make it available from the internet, and the app talks to that locally hosted server.
My teacher made me do a full force brake during my training. It was more intense than I had anticipated, and my teacher forgot that he still had his sunglasses on his head, which promptly smashed into the windshield :D
Defining "normal" code as "not having UB" is quite disingenuous though, isn't it?
Iterating over a vector while adding elements, for example, looks normal but isn't generally safe unless you know to pre-allocate enough memory.
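A minimal sketch of the kind of thing I mean (the variable names are made up): the loop compiles and looks perfectly ordinary, but push_back may reallocate the vector's storage and invalidate the iterator, so the program has undefined behaviour unless enough capacity was reserved up front.

    #include <vector>

    int main() {
        std::vector<int> values = {1, 2, 3};
        // values.reserve(8);  // without a large enough reserve, the loop below is UB
        for (auto it = values.begin(); it != values.end(); ++it) {
            if (*it % 2 == 1) {
                // push_back may reallocate, invalidating `it`; the next `++it`
                // then uses a dangling iterator.
                values.push_back(*it * 2);
            }
        }
        return 0;
    }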
I work with both, having started with C++ about 17 years ago, and agree that Rust feels like a relatively simple language compared to C++. Rust might feel harder to learn initially because the borrow checker won't let you compile certain programs, but once you are over this initial hump, the rest is quite straightforward.
I don't really agree with that. I'd say they're complex in different directions. C++ has complexities Rust doesn't because it bends over backwards for source-level compatibility. A lot of it is entirely at the semantic and pragmatic level. Rust's complexities are mostly due to its type system, meaning the complexity is at the syntactic level. I had never seen a computer crash because an IDE was trying to figure out how to parse a program before I worked with Rust, for example.
"Too much anime" was not a phrase I expected to see in a compiler bug report.
Internal compiler errors do happen from time to time. They're annoying but usually easy to work around. I've had projects where I simply cannot use rust-analyzer because it never finishes; it just eats RAM without accomplishing anything.
>I had never seen a computer crash because an IDE was trying to figure out how to parse a program before
This happens daily on my Intel MBP in Xcode. In only a ~15k LoC small app, 99% Swift. I’ve had to break up several functions into smaller ones purely because the compiler chokes so easily. They actually have a dedicated error message to the tune of “couldn’t figure out types here, could you break this function up please?”.
But yeah, outside of that I’ve never seen it happen in major languages using major language tooling. Never even saw it in 5 million+ line C/C++/.NET CLR mixed codebases.
> I had never seen a computer crash because an IDE was trying to figure out how to parse a program before I worked with Rust, for example.
C++ is complex enough that the IDE can't really parse much of the program's code in any useful fashion. You're lucky if it can get the right type hints and jump to definition. And even the latter may not be complete.
Contrast with e.g. Java, which makes it easy for the IDEs to get the full picture.
Sure, but in those cases the parser just gives up. It doesn't grow its working set trying harder and harder seemingly forever.
We're talking about C++ and Rust here, so I don't know why you bring up Java. If parsing Rust was as easy as parsing Java you would not see me complaining about it.
Giving up vs. crashing is a trivial difference, ultimately boiling down to an extra terminating condition connected to real-world resource use. Either way, the parsing is useless.
I brought up Java as an example of what it means for the IDE parsing to work and be useful.
What kind of IDE are you working in that will lose your work when it crashes?
I don't know what's going on in the Rust world, but in C++ world, even the good ol' Visual C++ (2017, 2019), when it crashes (which it does surprisingly often on my work's codebase), it doesn't lose anything other than maybe unsaved edits. It's annoying, sure, but 30 seconds later it's back to where it was before the crash.
Also, a parser that doesn't work is not a trivial inconvenience. It just means the tool doesn't work. From the POV of wanting to use advanced functionality that relies on parsing the code, it doesn't matter whether the tool aborts its execution so it doesn't crash, or has to be disabled because it doesn't abort its execution and just crashes. The end result is the same: I can't use the advanced functionality.
What I said was that the computer crashed. The IDE used so much memory that it took the system with it. When it came back up something weird had happened to the environment and it was ignoring the stuff in .bashrc.
>Also, a parser that doesn't work is not a trivial inconvenience. [...] The end result is the same: I can't use the advanced functionality.
Yeah. Now compare "the IDE is just working as a glorified text editor" to what I'm describing above.
I'm sorry for misunderstanding your earlier comment, and thank you for clarifying. I can see how this is a much more serious problem.
However.
That sounds to me less like an IDE problem, and more like a Linux problem. Specifically, the problem with the... unique way a typical Linux system handles OOM state, i.e. by suddenly curling into a ball and becoming completely unresponsive until you power-cycle it. I've hit that a couple times in the past, and my solutions were, in order:
- Monitoring the memory usage and killing the offending process (a graph database runtime) before it OOMs the system;
- After becoming tired of the constant vigilance, quadrupling the amount of RAM available in the OS; (yes, this is why people overprovision their machines relative to what naive economists or operations people may think..)
- Changing the job, and switching to working on Windows; WSL may not be 100% like a real Linux, but it inherits sane OOM handling from its host platform.
I'm sure there is a way in Linux to set a memory quota for the offending IDE process. This would hopefully reduce your problem to the more benign (if annoying) case I described earlier.
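For what it's worth, one blunt way to do that quota (a sketch only: RLIMIT_AS caps virtual address space rather than resident memory, so it's a crude proxy, and cgroups or systemd-run give finer control) is a tiny launcher that sets the limit and then execs whatever command you hand it, so a runaway process gets allocation failures instead of dragging the whole box down.

    // memcap.cpp: run a command under a (rough) memory quota via RLIMIT_AS.
    #include <cstdio>
    #include <cstdlib>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc < 3) {
            std::fprintf(stderr, "usage: %s <limit-GiB> <command> [args...]\n", argv[0]);
            return 1;
        }
        // Convert the GiB argument to bytes.
        rlim_t bytes = static_cast<rlim_t>(std::atoll(argv[1])) * 1024ULL * 1024ULL * 1024ULL;
        struct rlimit lim;
        lim.rlim_cur = bytes;  // soft limit
        lim.rlim_max = bytes;  // hard limit
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            std::perror("setrlimit");
            return 1;
        }
        // The limit survives exec, so the launched program inherits it.
        execvp(argv[2], argv + 2);
        std::perror("execvp");  // only reached if exec failed
        return 1;
    }

You would then point the editor at something like "memcap 8 <language server binary>" instead of the binary itself.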
I actually run Windows mainly. This was inside a Hyper-V VM with 32 GiB of RAM. I'd like to be able to work on this project from Windows, but unfortunately I can't, and don't have the energy or inclination to figure out how to get it building on Windows. I already knew rust-analyzer had this problem, which is partly why I allocated so much memory for the VM. Unfortunately I triggered a rust-analyzer rebuild just as I was already building another codebase in a different directory. That's what brought it over the edge.
While I agree that Linux sucks at handling this particular situation, my point was about Rust's complexity. Normally, when you're using C/C++ dependencies the IDE doesn't need to inspect the internals of the libraries to figure out how to parse dependent code. And it's also true that rust-analyzer doesn't know how to stop trying. It will parse what you give it, even if it kills your machine.
I had emacs crash (well, lock-up) while trying to parse the error messages from some extreme C++ metaprogramming. It was at least a decade ago and both emacs and C++ have improved, but still...
edit: mind, in the same code base (10M+ loc), Visual Studio would crash while attempting to initialize Intellisense.
Ever since I learned of sum types, they have ruined my enjoyment of programming languages which don't have them. I sorely miss them in C++ for example (and std::variant is not a worthy alternative).
I don't understand why any new language wouldn't have them.
Pedantic typechecking is like learning to spot improper kerning: you think it's a good thing, but you spend your entire life cringing at the world around you.
std::variant is a good example of a lot of what's wrong with the C++ improvement process as a language.
If you just want to pattern match on the type with a visitor, there is "another convenience helper" that you need to bring in yourself, and the result still doesn't look pleasant (see the sketch below).
It was introduced back in C++17, yet even in C++23 you still need to write a std::visit to process it. Committee members waste time on yak shaving like std::print
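For anyone who hasn't run into it, this is roughly the helper being referred to: the well-known "overloaded" idiom, which you still have to define yourself (or copy from cppreference) before std::visit reads anything like pattern matching. A C++17 sketch:

    #include <cstdio>
    #include <string>
    #include <variant>

    // The "convenience helper": inherit the call operators of every lambda you
    // pass in, so std::visit can dispatch on the alternative's type.
    template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;  // deduction guide (implicit in C++20)

    using Value = std::variant<int, double, std::string>;

    int main() {
        Value v = std::string("hello");
        // Still spelled std::visit plus a hand-rolled helper, not a language-level match.
        std::visit(overloaded{
            [](int i)                { std::printf("int: %d\n", i); },
            [](double d)             { std::printf("double: %f\n", d); },
            [](const std::string& s) { std::printf("string: %s\n", s.c_str()); },
        }, v);
        return 0;
    }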