Please ELI5 for me: How are AI agents different from traditional workflow engines, which orchestrate a set of tasks by interacting with both humans and other software systems?
But rule-based processing was exactly the requirement. Why should the workflow automation come up with rules on the fly when the rules were already defined in the business process requirements? Aren't deterministic rules more precise and reliable than rules defined by probabilistic methods?
Autonomy/automation makes sense where error-prone, repetitive human activity is involved. But rule definitions are not repetitive human tasks: they are defined once and then run every time by automation. Why would one need a probabilistic rule definition for a one-time manual task? I don't see huge gains here.
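To make the contrast concrete, here's a hypothetical sketch (function names like classify_with_llm are placeholders, not any real API): the same routing decision written once as a deterministic rule taken from the requirements, and once delegated to a model at runtime.

    def route_invoice_deterministic(invoice: dict) -> str:
        # Rule straight from the business requirements:
        # auditable, repeatable, same output for the same input.
        if invoice["amount"] > 10_000:
            return "manager_approval"
        return "auto_approve"

    def route_invoice_agentic(invoice: dict, classify_with_llm) -> str:
        # The "rule" is inferred at runtime by a model; the threshold is
        # implicit, and the same input may not always yield the same output.
        return classify_with_llm(
            f"Route this invoice: {invoice}. "
            "Answer with 'manager_approval' or 'auto_approve'."
        )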
I like determinism and objectivity as much as the next guy, but working in the industry for decades led me to realize that conditions change over time and your workflow slowly drifts away from reality. It would be more flexible to employ an AI agent, if it actually does what it says on the tin.
There is no "reality" other than business requirements. That's the context for a workflow. You probably meant that the requirements aren't agile enough to meet the changing world outside. That's a different problem, I think. You can't bypass requirements and expect workflow to dynamically adapt to the changing reality. If that's the direction with AI-driven business re-engineering, then we are back to the chaos, exposing the business logic directly to the outside world.
Is this what will be tried as a fix for the potential fallout from continuously decreasing fertility rates (which lead to population decline and thus hurt a consumption-based economy)?
Nope. This is just greed making the most of the moment without any thought for tomorrow. Nobody knows or cares where it takes us, but everybody knows there is money to be made today. So you need a model that analyzes the economy with greed as the only driving force and no foresight. Add some parameters to account for monopolistic forces, the human desire to be lazy, the tendency to mistake dumbing-down for progress, and the loss of our biological senses to devices. That may give a better prediction.
AI systems cannot be economic agents, in the sense of being relevant participants in economic transactions. An economic transaction is an exchange between people with needs (preferences, etc.) who can die -- and so, fundamentally, who are engaged in exchanges of (productive) time via making and keeping promises. Time is the underlying variable of all economics, and it's what everything ends up in ratio to -- the marginal minute of life.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production (etc.) -- the price of a good cannot derive from a system with no subjective desire and no final ends.
Replace "AI system" with "corporation" in the above and reread it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue in section 4.3, "Rethinking the legal boundaries of the corporation", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
I think you're mistaking the philosophical basis of the parent's comment. Maybe a more succinct way to put what I believe was their point: "No matter how complex and productive the AI, it is still operating as a form of capital, not as a capitalist." Absent being tethered to a desire (for instance, via an owner), an AI has no function to optimize, and therefore the most optimal cost is simply shutting off.
Except they don't really "think" and they are not conscious. Expecting your toaster or car to never rise up against you is a good strategy. AI models have more in common with a toaster than with a human being. Which is why they cannot be economic agents. Even if corporations profit off them, the corporation will be the economic agent, not the AI models.
> Time is the underlying variable of all economics
Not quite. It's scarcity, not time: scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
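As a toy illustration of that framing (the utilities and resource limits are invented for the example), the allocation can be written as a small linear program, e.g. with scipy:

    from scipy.optimize import linprog

    # Maximize utility 3*x1 + 2*x2 (linprog minimizes, so negate).
    c = [-3, -2]
    # Scarce inputs: x1 + 2*x2 <= 40 labor hours; 3*x1 + x2 <= 60 units of land.
    A_ub = [[1, 2], [3, 1]]
    b_ub = [40, 60]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal quantities (16, 12) and total utility 72

The desires are "unlimited" in the sense that more of either good always raises utility; only the constraints stop the solution from growing without bound.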
Depending on how you feel about various theories of development, there's an argument that all of these categories reduce to time. At the very least, the relationship between labor, capital, and time seems pretty fundamental: labor cannot be instantaneous, capital grows over time, etc.
They can all be related on a philosophical level but in practice economists treat them as separate factors of production. It's land, labor, and capital classically. Technology/entrepreneurship can be seen as another factor, distinctly separate from labor.
I agree that time isn’t an input in the economic system.
Although one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
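E.g., the standard compounding formula, where time t enters the closed form directly (shown here just for concreteness):

    def compound(principal: float, rate: float, years: float, n: int = 12) -> float:
        # Future value with the rate compounded n times per year:
        # A = P * (1 + r/n) ** (n * t)
        return principal * (1 + rate / n) ** (n * years)

    print(compound(1_000.0, 0.05, 10))  # ~1647.01 with monthly compounding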
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time: some by cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large-scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
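For a flavor of the mechanics, a deliberately tiny sketch (a toy, not any particular published model): households spend a fraction of their cash at randomly chosen firms, and firms pay all revenue back out as wages.

    import random

    class Household:
        def __init__(self):
            self.cash = 100.0
        def spend(self):
            outlay = 0.1 * self.cash   # spend a fixed fraction each step
            self.cash -= outlay
            return outlay

    class Firm:
        def __init__(self):
            self.revenue = 0.0

    def step(households, firms):
        for h in households:
            random.choice(firms).revenue += h.spend()
        wage = sum(f.revenue for f in firms) / len(households)
        for f in firms:
            f.revenue = 0.0            # all revenue paid back out as wages
        for h in households:
            h.cash += wage

    households = [Household() for _ in range(1000)]
    firms = [Firm() for _ in range(50)]
    for _ in range(10):
        step(households, firms)

Real ABMs of course add prices, credit, inventories, and heterogeneous behavior, but the time dimension shows up only as the simulation step, not as an economic input.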
Yeah, that's well articulated and well reasoned. Unfortunately, so long as these agents can make money for their owners in some way, the argument is moot. You cannot expect capitalists to think of anything other than profit in the next quarter, or the quarter after that.
I think you may be missing the general idea of DAOs by restricting yourself to a few particular historical uses (and many a failed one at that), back from when agentic AI wasn't a thing.
The hackability of these things, though, remains a very valid topic, as it is orthogonal to the fact that AI has arrived on the scene.
Well, for starters, if some incredible change to capitalism doesn't occur, we are going to have to come up with never-before-seen cooperative software tools for the general populace to assess and avoid the most egregious companies that stop hiring people.
Tools for: mass harassment campaigns against rich people/companies that no longer support human life, dynamically calculating the most damage you can do without crossing into illegality.
Automatically suggesting local human-run businesses as alternatives to the big evils, or collecting like-minded groups of people to start up new competition. Tracking individual rich people and which of their new companies and decisions are doing ongoing damage, and somehow recognizing and categorizing big tech's trend of "doing the same old illegal shit, except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market-analysis tools for workers to have some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and creating coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems like forcing agents/robots to be tied to humans. If a company wants 100 robots, it must be paying a human for every robot it utilizes somehow. Maybe a dynamic ratio of some kind: if the government decided most people are getting enough resources to survive, then maybe 2 robots per human paid.
“…the only workable future to me seems like forcing agents/robots to be tied to humans.”
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
https://en.wikipedia.org/wiki/Accelerando