"So-called high frequency trading firms place trades in a fraction of a second, sometimes in a bet that they can move faster than bigger competitors."
First off: no. Big money plays in high frequency trading (roughly half of all trading activity), and the smaller traders without instantaneous access are the losers in this game.
Secondly, NASDAQ's obsession with precise global sequencing is A) misguided and B) effectively impossible to do right 100% of the time. Given this, I would argue that the appropriate thing to do is change the market requirements. And I'd argue that like this:
1) Temporally quantize the market. Orders come in on an open temporal window that is sufficiently long to account for global latency of non-pathological communication (sorry, tor users) and a bit of computation time. Everyone gets to swim in the same pool. Maybe one second, maybe more. Nobody gets to see the order book until it's resolved. Write-only.
2) Lock the book and fulfill orders from the set of satisfiable orders. If there is contention for a trade (there will always be some), fulfill the contentious trades randomly, using randomness seeded from a pre-announced salt and a hash of some or all of the order book for the window (see the sketch after this list).
3) Return the results and the hashes of the order book, next salt, etc, for verifiability and prep.
4) Re-open the order window.
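A minimal sketch of what that loop might look like, assuming toy order/book types and a hypothetical `resolve_window` function; the only load-bearing idea is that the tie-break randomness is derived from the pre-announced salt plus a hash of the locked book, so step 3 lets anyone verify it after the fact:

```python
import hashlib
import random
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str     # "buy" or "sell"
    price: float
    qty: int

def resolve_window(orders, salt: bytes):
    """One quantized window: lock the book, match, break contention randomly."""
    # Seed the tie-break from the pre-announced salt and a hash of the locked book,
    # so the shuffle is reproducible and verifiable after publication (step 3).
    book_bytes = repr(sorted((o.trader, o.side, o.price, o.qty) for o in orders)).encode()
    digest = hashlib.sha256(salt + book_bytes).digest()
    rng = random.Random(digest)

    buys = [o for o in orders if o.side == "buy"]
    sells = [o for o in orders if o.side == "sell"]
    rng.shuffle(buys)     # contention is resolved by the seeded RNG,
    rng.shuffle(sells)    # not by who arrived first within the window

    fills = []
    for b in buys:
        for s in sells:
            if b.qty and s.qty and b.price >= s.price:
                traded = min(b.qty, s.qty)
                fills.append((b.trader, s.trader, s.price, traded))
                b.qty -= traded
                s.qty -= traded
    # Publish the fills plus the book hash (and the next salt) for verification.
    return fills, digest.hex()
```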
High frequency traders would hate this, because they wouldn't be able to pounce on quick movements, even without fronting slower traders.
It would, naturally, increase latency for trades by virtue of having to wait for market resolution. However, mere sequencing doesn't solve the problem of having to resolve and confirm trades (the speed of light is so cruel), so I'm left utterly unsold on the market-efficiency benefit of ultra-high order resolution. Wealthy high frequency traders want to use time to buy an advantage, and the liquidity support they provide to the markets is dubious, at best, since they pull the plug as soon as things get crazy.
Most HFT shops are relatively small. HFT is all about latency and turnover. Big quant shops might have HFT elements but lean far more towards systematic/algo strategies that can be relatively high latency (still super low latency, but not HFT), because those are the only strategies you can deploy serious VaR with. The guys crushing HFT are not huge hedge funds, and they are solving more engineering problems than building trading models.
Also, no, small traders don't lose. Retail traders et al get much tighter spreads, cheaper execution by routing to internalizers, etc. It's big institutionals with size to trade that get front-run and have to worry about HFT killing their shortfall. On the institutional side, it's about lit venues preferring HFT with special order types and thin top of book. On the retail side, the issue mostly comes down to direct feed vs. SIP/CQS, thanks to NBBO opening the door for latency arb courtesy of yet more regulation. Blame your regulators, folks. This is why dark pools became a thing.
> and the liquidity support they provide to the markets is dubious, at best, since they pull the plug as soon as things get crazy
This bit is certainly true.
Source: hedge fund trader who hates HFT, not in principle but because they are good at what they do
This used to be true. Small HFT firms can find a niche that's profitable, but that's because there isn't enough money to be made by the big firms in those niches.
In the last few years there has been massive consolidation of the smaller HFT players; the space is commoditized and controlled by a few firms.
Yes, consolidation and layoffs. Go see how many KCG guys were kept by Virtu. If anything the industry has gotten smaller. People are just buying up flow now. There are no "big" HFT firms when compared to proper buy side.
> the space is commoditized and controlled by a few firms
Yeah, it's funny, I still read outrage about those greedy "HFT players", as if HFT were still highly relevant. It was a blip in the history of the financial markets, when no one but a few had high-speed trading. The alpha has been washed away.
HFT has reduced costs under 99.99% of market environments. My direct cost and slippage is still so much lower than it would have been 30 years ago. Hell, even 10 years ago.
They would simply vaporize. At the margin, people trade because the frictional costs (spreads, fees, pricing/tracking error, risk) of trading are low. Fewer people would trade.
HFTs basically play an intermediary role: risking capital to buffer supply/demand imbalances, aiming to buy things at a discount or sell at a premium to their perceived value. The more transactions an intermediary does, the smaller his margins per transaction can be. Low margins fuel even more transactions in a virtuous cycle, and competition drives margins down.
Take this thought experiment to an extreme level. What would happen if short term speculation were banned, all stocks traded January 1, and had to be held for a year? Only very wealthy people with high risk tolerance could participate in the market, since they couldn't sell companies at will to fund personal expenses or if the business underperformed.
Volumes would plummet. Exchange/brokerage fees would be a significant percentage of the deal size, similar to what real estate agents charge, since they can only do a few transactions. Intermediaries would be something akin to a private equity fund, bidding 10-20%+ under value to cover the risk of holding for a year.
Even with trading reduced to once a minute/hour/day, many trades HFTs take the other side of now--say a medium frequency quant fund believes a company is underpriced by 0.1%--simply would not exist anymore, because spreads and fees would increase. Most ETFs would disappear. The marginal cost for an HFT to make markets in some small ETF is basically 0, but a human would make more at McDonalds than market making an ETF that trades a few hundred thousand shares a day.
As noted elsewhere in this thread, I suspect new markets would spring up. If the underlying could only trade once a year, the options market would be huge.
They would go to whoever else was making markets, whoever else was doing short-term trading, and some of the rest to quant firms doing frequent but not HFT trading.
Yes, this is indeed true. Not sure why you're being downvoted. The distinction is still one worth making, because average retail investors can typically have better transparency on this in a mutual fund vis-a-vis tracking error and management fees, whereas a retail trader in the open market is going to have almost no idea where the market really was for that market order they dumped into their Etrade account.
Can you explain to me why an index fund would pay less but a hedge fund would pay more? They are all buyers in the market, and index funds are more predictable, as they have to (roughly) adhere to their index. Once an index changes, they have a limited time to buy or sell. GE being kicked out of the Dow last week is a great example.
> Can you explain to me why an index fund would pay less but a hedge fund would pay more? They are all buyers in the market, and index funds are more predictable
And that's the reason. Market makers (and especially HFTs) profit from razor thin spreads on predictable orders, but they can lose money when they get hit by a big unpredictable order, so they avoid them and/or charge them more. A hedge fund's order is inherently dangerous to a market maker, because they have no idea before the fact if the hedge fund is just offloading 1k shares to rebalance their risk profile, or if they're liquidating their entire position, or taking a big short position. Hedge funds can change the entire market. Some guy calling up his broker and asking to sell his Apple shares won't.
So retail orders and index funds are safe, so they can be charged lower spreads. And because they're profitable, market makers compete for the volume, driving down prices. And the data supports this - prices paid by retail investors have crashed, and complaints from hedge funds and big active investors have spiked. :)
"Also, no, small traders don't lose. Retail traders et al get much tighter spreads, cheaper execution by routing to internalizers, etc."
There's liquidity until there isn't. It was easier to get an order filled during a run to the exit pre-HFT. When everyone runs to the exit in an HFT world, retail investors are the last to get their orders filled, if they're lucky.
The Manning rule dictates that retail orders held by a market maker must be filled before any other orders (or, equally, that fills must be given to the retail order), brokers don't look kindly on firms that reject customer orders with any regularity, and anyway, retail flow on a volatile, wide-spread symbol is loved by market makers.
I don't know if this is true, but higher liquidity and lower spreads reduce the cost of every trade, which is money in the pockets of retail and value investors. I assume this is why Vanguard says HFT has been helpful for them (despite the fact that they don't do HFT themselves).
Aren't the cost of the trade and price of the trade two different things? Tighter spreads can only do so much to offset disadvantageous pricing, right?
No, you can't. The spread is the difference between the bid and the ask. You can't have disadvantageous pricing without affecting the ask, which would in turn drive up the spread.
People believe all sorts of weird things, but the actual offense of front-running involves an agency relation: it occurs when you work with a broker/dealer to order your securities, and upon receiving your order, they trade for their own account ahead of yours.
I don't know if "esoteric" is the word. You're saying, lots of people seem to believe that the advantage fast electronic market making has over "conventional" trading is a form of front-running.
That is true. But: it is not.
Lots of people also believe that high-end market research (for instance, targeted research and maybe even electronic surveillance about how many widgets a company has sold) is a form of insider trading. But: it is not, even though lots of people say that, and for the same reason.
In both cases, people believe there is something shady about people going to extraordinary lengths to obtain a trading advantage. And, in both cases, not only is the market resilient to those efforts to gain advantage, but the markets are theoretically improved by them. The point of a market is to expediently arrive at the best (as in, most reflective of intrinsic value) price for something, and to make it efficient for people to buy and sell at that price.
I am aware that people believing a thing doesn't make it true.
I am also aware of the arguments in favor of HFT. As you stated, the oft-made claim that the market is improved is theoretical as well; hence, it is also a product of "belief".
It's not a settled question. [0]
>The point of a market is to expediently arrive at the best (as in, most reflective of intrinsic value) price
The improvement I'm talking about is objective: in the former case, by competing down spreads and minimizing the cost to execute any given trade, and in both by expediting price discovery.
You can disagree that these are things worth optimizing (though if you weren't careful you'd risk arguing in some sense against the premise of a market), but it's less clear to me how you'd argue that the causality is other than what my argument says it is.
How does that question even make sense? By definition, the spread is a tax investors --- including small investors --- pay to buy or sell a holding. In what way could they possibly benefit from wider spreads?
Regarding Flash Boys: I don't know of a single person who works in trading who has stuck up for that book. I strongly recommend "Flash Boys: Not So Fast", which debunks it but is also much more interesting from a technical perspective than Lewis's book.
Yeah, and the old DMMs used to stub quote when things got rough. Same shit, different day. Under most environments, HFT has been a net positive, particularly in the single name options market. Pretty much every name out there is quoted with decent depth because an algo can now quote a few vols either side and make decent coin given that it costs nothing to stay laid up these days.
> First off: no. Big money plays in high frequency trading (roughly half of all trading activity), and the smaller traders without instantaneous access are the losers in this game.
Vanguard is big money. Blackrock is big money. Fidelity is big money.
These big money vehicles are where most Americans, that have any investments at all, have their investments. So, honest question, should we care that smaller traders are the losers in this game?
Interesting idea, but how would you deal with these issues:
1.) Randomizing who receives contentious trades will just encourage order splitting and gaming. Sure some of that can be banned, but nothing stops big firms from putting each trading group into different legal entities or other tricks.
This also discourages traders from bidding their true most aggressive price. In time priority, you must, or someone else will snatch your trade. If you remove the reward, why take the risk?
2.) Being fast would still matter. Reality isn't quantized, so having access to relevant real world information or a proxy for such (trading activity in other markets or products) would still be an edge. Existing quantized trading points such as exchange auctions are still latency sensitive.
3.) The modern marketplace is interconnected. No ETF market maker will quote a tight spread if he can't confidently hedge his risk in the individual stocks. Going into a one second auction with random allocation is a lot riskier than just hitting the bid on Nasdaq, maybe paying an extra penny in the rare case when you're slow. A lot of liquidity comes from people running these arb/stat arb trades. It tightens spreads and helps keep prices in line. Why harm it?
4.) There is more to HFT success than speed and I don't think this would hurt them too much or take us back to 1997 with day traders sitting at home making big money. Virtu or some other HFT shop was the biggest trader on IEX, and they have a speed bump similar to this, just less extreme.
For 1), I'd randomize with proportional unit (share) representation, and I'd certainly be open to rule-prioritized execution (e.g. most-favorable taker first) if it didn't lead to degenerate incentives. Smart market design can incentivize people to play at their best price. For example: locational marginal pricing in wholesale energy markets...
2) I agree that the real world isn't quantized, but I think that a settlement tock to the bidding tick could be used to reduce the value of proximity. IEX actually implemented general latency with long runs of fiber, which is a really elegant fix. They couldn't make the rest of the world latent, so it's something like 700 microseconds, enough to remove colocation advantages, but only enough to solve for New York.
3) As far as I know, HFTs like to play in limit order and derivative books. It's where practices like flashing and spoofing have come from. Market orders are fraught with peril, especially if you don't know the matching rules for the exchange. As far as tightening the spread and aiding price discovery, I don't think that those two things are the same. If a security has naturally low volume, responsive high frequency trading can effectively be predatory.
4) I agree that there is more to HFT than just speed, but I view high frequency temporal arbitrage as an unnecessary market feature that provides the illusion of liquidity right up until that liquidity would truly be useful (since robots get benched when things go strange).
Granted, the temporal steps that I'm advocating here are a little provocative. The US could be solved in something like 200ms, and larger global markets, like currency exchange, are already fairly decentralized (though not as much as they used to be, as far as I know).
Either way, I don't think that NASDAQ can assure global temporal coherence, especially without controlling the entire network. Given that, it makes sense to design robust systems that don't pivot into rare modalities in exceptional cases. Just pull clock slew off the board.
1.) If you do it proportional to shares, then it introduces bad incentives to oversize orders. There's a reason why almost every market in the world uses price time priority in a realtime two sided auction.
2.) What problem does this solve? Proximity is freely available and relatively inexpensive. Barriers to entry for professional traders are much lower than the days of buying exchange seats. 10s of thousands a month sounds like a lot, but it's nothing compared to the costs of running a trading operation.
You could give every man, woman and child a rack at Nasdaq with a nanosecond trading system, and they wouldn't make any money. Proximity only matters to traders running latency sensitive strategies. These strategies have low margins per trade and can only profit through scale. Running them requires robust systems that take years to develop, capital, smart researchers, and data.
3.) Spoofing is illegal and people go to prison for it. HFT is just a catch all term for executing short term trading strategies with a computer. Most HFTs make their money through market making, arbitrage, stat arb, or some blend of those. All profitable trading can be cast as predatory, but that doesn't make it bad. Having accurate prices and more quotes in the market is a public good.
4.) So you believe it's good if S&P 500 futures go up 2%, nobody arbitrages the S&P 500 ETF, and John Smith comes to the exchange and sells his ETF shares 2% below their value? I'm guessing not.
Odds are you believe arbitrage and efficient pricing are important. If you believe that, then someone should do those trades, and they'll earn profit as a reward for correcting the price. Why shouldn't it be the person or machine who does it first and for the lowest possible margins?
I don’t think this helps enough. It’ll keep people from pouncing on NASDAQ very quickly due to NASDAQ movements, but instead people will play the game of trying to be the last one into the window when trading based on information from other sources.
Some form of clever randomization might help, but getting that right is very complicated.
I think you could account for hold-out by having the resolve window be longer than the trade-insert window, at least to an extent.
The trade-insert window can't be too long, or it would leave lots of room for regret, like people who voted by mail for a candidate caught in a scandal two days before an election.
If I place an order to buy 1 share for $1 and another share for $1.01 because the market data shows that there is only one share available for $1, then the exchange needs to process my $1 order before my $1.01 order. More complicated scenarios exist, too, especially once you take into account resting orders and modifications to existing resting orders.
Not an expert so I might be totally off, but I think one of the risks is the existence of alternative markets.
Let's say you're only allowed to trade once a week, and I buy a bunch of Hooli stock at, say, $25 today. Then they launch Yet Another Hooli Chat tomorrow and everyone has decided that's going to make the price go up. You might want to buy some shares at $27 right now, because you think they'll be worth $30 next week. And maybe I want to see my profits right now for whatever reason (maybe I worry about them shutting down their new chat service by next week and I have a very low risk tolerance) and I'm happy to see some profit now instead of maybe more profit next week. Obviously I should sell to you. I can't do that via NASDAQ because we can't trade again until next week, but if you and I are in contact, we can just trade privately ("over the counter").
If lots of people start doing that, and we all join up, we essentially become another stock exchange. So it's not in NASDAQ's interest (or anyone's, really) for them to voluntarily stop doing things that will just cause another stock exchange to exist that does those same things. The options are either to convince market participants that the new rules are actually going to be better (more profitable) for them, or to lobby for regulation that makes the old rules impossible for someone else to implement.
As I understand it, this is basically the origin story of NASDAQ: the National Association of Securities Dealers believed (correctly) that they weren't getting good prices on existing stock exchanges, and computerizing a stock exchange had just become feasible, so they built an Automated Quotation system that initially just published prices to each other efficiently. Eventually it turned into an actual exchange.
Dark pools are a special case of this phenomenon, yes, but they're different from regular stock exchanges precisely because they're dark. Some market participants prefer them, and some don't. But a hypothetical "light pool" would be strictly preferable to NASDAQ-but-weekly - it's almost always in every participant's individual interest to trade immediately instead of waiting up to a week and randomly getting orders filled or not.
I'd like to see qeternity chime in on this, since it's their space, but a market that is too rigid can't actually provide sufficient liquidity to users to function as an effective market.
Let's say you're comparing a savings account (1% interest, a day to withdraw), six month certificates of deposit (3% interest, six months out), and cash in your pocket (0% interest, instant), and the goal is to account for market fluctuation in Laffy Taffy and come away with the most Laffy Taffy in a year.
The cash doesn't appreciate, but you can buy whenever the price changes. If Laffy Taffy is really volatile, working in cash gives you the flexibility you need to maximize your trade value and buy/sell at the right time.
If Laffy Taffy never sees a price change, put your money in that CD and wait it out (assuming no other available investment vehicles). You'll be able to buy a little more pancreatic strain in six months.
The savings account is somewhere in the middle.
A really large quantum for a market that attempts to serve as a proxy for real world fundamentals could be prohibitively risky for participants, and that could reduce overall market participation.
I suspect that there is a sweet spot, and I assert (without evidence) that the sweet spot is greater than the time it takes light to circle the globe.
Capital gains taxes are the same if you hold for a minute vs a month, but that doesn’t have to be true. I wonder if addressing the issue raised by the OP might be approached from that angle?
Market makers have absolutely no incentive to quote tight spreads until the last minute, when they might have some idea about the price; or maybe they don't, so they don't quote at all.
A lot of 'alternative solutions' to the continuous limit book assume that there's some other mechanism for price discovery so people make informed decisions, but really it is the book itself which provides that information.
None really. But having a tiny window privileges those who pay constant attention to the market. If the resolution was once a week all manner of riff raff could then invest and understand what they were doing.
Bear in mind that exchanges are owned by the companies that trade on them - they've got a VERY strong vested interest in not fixing the problem.
HFT works because fast traders can see a buy and sell order that are a distance apart, buy from the seller, then immediately offer to sell it at a fractionally higher price. Because they can see the buy order at all times, they know they can sell what they've just bought and make a tiny profit. All they have to do is ensure they can see the buy and sell orders faster than anyone else, then keep hitting those orders. Repeat until retirement.
Several years ago, a new exchange was set up to try to address this problem by simply putting a huge coil (many km) of fibre in front of the system/s that accepted orders. This approach meant that, while HFTs could still see orders before anyone else and hit them faster, their speed advantage was largely lost due to the latency as their order placement was just a bit slower. As a result there was no guarantee the 2nd part of the order those HFTs were hitting was still there, so they might get left holding a position without being able to sell it at the profit they were used to. Each HFT could try to front run orders as they do now, but the fairly small increase in latency killed their advantage.
HFT companies managed to avoid using this exchange very successfully.
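A toy model of the race described above, with made-up reaction times, just to show the mechanism the coil exploits: a fixed delay in front of matching doesn't slow anyone relative to anyone else, but it gives the resting order time to move before the fast order lands.

```python
# All times in microseconds; these numbers are illustrative, not real venue figures.
def arb_succeeds(hft_reaction_us, quote_lifetime_us, speed_bump_us):
    # The fast trader only captures the stale quote if their order, including
    # the venue's added delay, arrives before the quote is repriced or pulled.
    return hft_reaction_us + speed_bump_us < quote_lifetime_us

print(arb_succeeds(50, 300, 0))    # True: pure speed wins on a conventional venue
print(arb_succeeds(50, 300, 700))  # False: the added delay outlasts the stale quote
```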
> HFT companies managed to avoid using this exchange very successfully.
You mean they didn't accidentally trade on IEX? That's not surprising.
Also, I would believe that if you're doing pure latency arb, then trading on IEX isn't profitable, but there are other high-frequency strategies besides pure latency arb. Are you restricting your definition of HFT to pure latency arb strategies?
If you quantise the market, someone else will start a secondary market for the same commodity and trade there, waiting till the close of the one second window to arbitrage to the main market.
People who participate directly on the main market then effectively lose out, since you are submitting your order blind into the future while other people on the secondary market are trading in real time and have more information than you at the close of the 1 second window.
This doesn't solve the problem. There are multiple exchanges. If a real time exchange experiences a price drop then you could quickly sell stock on the quantized exchange with the hope of being randomly chosen.
Why not, instead of quantizing the market, just have a 1 second (or whatever) sliding window where trades are locked prior to resolution? In other words, you can put an order in whenever, but it resolves at least one second in and can't be cancelled for two seconds (unless the trader idiotically puts a cancel order in without any extra information).
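A rough sketch of that sliding-window variant, under the stated assumptions (resolve after one second, cancel-locked for two); the class and names are invented for illustration:

```python
import time

RESOLVE_DELAY_S = 1.0   # an order becomes eligible to match this long after arrival
CANCEL_LOCK_S = 2.0     # and cannot be cancelled until this long after arrival

class SlidingWindowBook:
    def __init__(self):
        self.orders = {}  # order_id -> (submit_time, order)

    def submit(self, order_id, order):
        self.orders[order_id] = (time.monotonic(), order)

    def cancel(self, order_id):
        submitted, _ = self.orders[order_id]
        if time.monotonic() - submitted < CANCEL_LOCK_S:
            raise ValueError("order is still cancel-locked")
        del self.orders[order_id]

    def matchable(self):
        # Only orders older than the resolve delay are visible to the matcher.
        now = time.monotonic()
        return [o for t, o in self.orders.values() if now - t >= RESOLVE_DELAY_S]
```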
I'm a slow trader -- a few trades a year. I like, or at least don't mind, HFT.
It certainly doesn't hurt liquidity. There is, in principle, always someone who wants to trade with me, and the increased volume may make price-discovery a little more accurate.
When I place a limit order, I don't mind at all if someone has managed to front-run my order and sell it to me at the price I set. I got the thing I wanted to buy at the price I wanted to pay.
HFT is only troublesome if you play with fire -- market orders. The price you see may not be the price you get with a market order, as the market can do irrational things or front-run you. That risk is blunted entirely by limit orders. Your order might not fill, but when it does fill, it will only fill at or better than the price you asked.
I'm pretty sure you misunderstand market orders because by definition they are only visible to the market after they have traded, so nobody can get ahead of you or do something irrational before your order lands.
I would agree that market orders are a bad idea at any meaningful volume (outside of retail sizes) because of liquidity and routing reasons.
> I'm pretty sure you misunderstand market orders because by definition they are only visible to the market after they have traded, so nobody can get ahead of you or do something irrational before your order lands.
Have you read Flash Boys? Dunno if the loop-holes have all been solved, but basically it was possible to front-run orders. There was a regulation requiring brokers to execute an order on the exchange that had the best current price. This rule gave no weight to size. So HFT firms could place a tiny, negative expectancy order on one exchange. Then they could see the result of that trade and cancel/place orders on the next exchange that your broker's matching algorithm was going to hit up before your order got there.
Yes, I work in the industry. Flash boys is misleading garbage, it's a long form advertisement for IEX.
What a broker does has nothing to do with how market orders work. The strategy you're describing also doesn't really work because any respectable broker is sweeping all of the exchanges at once - the regulations considered this possibility and allowed this behavior. Also, many of the liquid symbols have single cent spreads making this strategy impossible.
Then this whole discussion is nonsensical since in almost all cases retail orders are directly filled by wholesale market makers and don't ever land on the exchanges (and don't see any 'frontrunning' as a result).
Sure. All I was saying is that it's theoretically possible for a market order that a customer submits online to be front-run (as opposed to a market order submitted to a single exchange). No clue how often it happens in practice.
It's theoretically possible, and always has been, for somebody to see an order execution in progress and trade ahead of it. In practice, that's frequently just a side effect of somebody being so slow that their actions trigger quant algorithms; it's not a super profitable game trying to latency-arb proper market sweeps anymore (not to say latency isn't important, but it's usually important for other reasons).
I maintain an NTP implementation. That comparison doesn't seem fair to me. It looks like they are comparing the old reference NTP implementation and not really the protocol itself. An NTP implementation can certainly synchronize clocks with better accuracy than 1 millisecond, or even 1 microsecond with hardware timestamping and good network switches.
There are some interesting ideas in the Huygens paper, but I don't see anything that couldn't be done also with NTP.
The other thing I don't understand is that this paper argues against hardware timestamping on the grounds that users won't want to buy expensive hardware, and that Huygens is for "standard hardware ... in current data centers". Expensive, niche hardware is normal for the HFT folks that care about nanosecond precision.
Yes, it seems that the designers tried to make something that would work in the current Google data centers. Which is not surprising as some of the authors work at Google.
I'd more want to know: why does Google need even better synchronization of time stamps than what they have now?
Good support for PTP in network switches is rare. If the users don't want to spend a lot of money on new switches with PTP support, they can synchronize clocks in the network with a protocol which doesn't require special support in switches.
It is great news if this algorithm can work directly on the public Internet, without requiring a specialized network - many scientific and engineering applications would be able to get their time reference directly from the Internet!
NTP and the other protocols currently in use are unauthenticated (there is NTP Autokey, etc., but its security properties are not ideal and it is mostly not deployed), and that is a big security hole, especially as more and more cryptographic programs are being put online. Since this protocol is meant for financial applications, hopefully the security issues can also be solved by using digital signatures.
I guess it's a rare scenario in which the finance industry makes a _direct_ contribution to technology.
If I'm understanding the Huygens paper linked in one of the other comments correctly, this is strictly in-datacenter only; it relies on properties of datacenter networking that don't apply to the broader internet, in particular that routes are mostly symmetric and mostly low-latency.
Haven't read the paper yet, but even if it is data-center only, it is still very relevant for science, e.g. DAQ in nuclear physics, especially if it doesn't need custom hardware.
I can't imagine how you could hope to reach nanosecond-precision over the internet without changing all the routing hardware in use today. With PTP you can already reach sub-microsecond synchronization but you need full hardware support for every network element on the route if you want to achieve that. The timestamping is done on the network PHY itself, which is how you can remove all jitter introduced by the kernel stack. Anything receiving and re-transmitting the packet along the way must update the timestamps to account for the processing delay.
Huygens is a little more clever and uses a statistical approach to sample multiple clocks and correlate them (if I understand correctly) however it still seems designed to work within a datacenter, I assume that over the internet the signal-to-noise ratio for measurements would worsen very significantly and lower the precision dramatically. It could well still outperform NTP however.
The signal-to-noise ratio would be a nuisance, naturally, but there's a more fundamental problem that's stopped anyone from trying this. Internet routing and latency is asymmetric - your request to the time server will, in general, go via a different route that takes a different amount of time compared to the message back. This introduces a bias that is indistinguishable from local clock error. Without precisely synchronised clocks at both ends, you can only determine the overall round-trip time and not the time each leg took, and therefore you cannot calculate the exact difference between your local clock and the server. This is an inherent limit on how accurate NTP can be over the Internet. (The other reason not to bother is that GPS is cheap and can provide a very accurate synchronized clock source over long distances.)
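The standard NTP offset arithmetic makes that bias concrete. Using the usual four timestamps (client send t0, server receive t1, server send t2, client receive t3), the offset estimate assumes the two legs take equal time, so any asymmetry shows up as phantom clock error:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated clock error, assuming symmetric paths
    delay = (t3 - t0) - (t2 - t1)          # round-trip delay, the only thing truly observable
    return offset, delay

# Suppose the client clock is exactly right, but the forward path takes 40 ms
# and the return path only 10 ms (asymmetric routing).
t0, t1, t2, t3 = 0.000, 0.040, 0.041, 0.051
offset, delay = ntp_offset_and_delay(t0, t1, t2, t3)
print(offset)  # ~0.015 s of phantom error: exactly half the 30 ms path asymmetry
print(delay)   # ~0.050 s round trip
```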
It would probably be more accurate to install a stratum 1 NTP server on your phone since it has GPS. But then you'd probably get hit with jitter from Wi-Fi/Bluetooth/USB.
You need to get the PPS to some other device, then it could be done. The whole GPS stratum 1 thing is mostly just PPS. You get the correct time with ntp, and the start of a second with the PPS from the GPS.
This technology will also be excellent for games. Right now, the server has to decide how it's going to break ties, and that results in confusing moments. For example, Overwatch favors the shooter, so you might see someone in a position to shoot you, use a defensive ability, and die even though you used it, because the server decides that if two events are close in time, the shooter wins. With a trusted time reference, you can actually order events from the perspective of the client instead of the server; so if you actually used that defensive ability before the shooter clicked the mouse, you don't die.
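A sketch of what "ordering events from the perspective of the client" could look like, assuming each client stamps its inputs with the shared, tightly synchronized clock (the event fields and values here are invented):

```python
from dataclasses import dataclass

@dataclass
class ClientEvent:
    player: str
    action: str           # e.g. "shoot" or "defensive_ability"
    client_time_ns: int   # timestamp taken on the synchronized client clock

def resolve_in_client_order(events):
    # Apply events in the order the players actually acted, rather than the
    # order their packets happened to reach the server.
    return sorted(events, key=lambda e: e.client_time_ns)

events = [
    ClientEvent("shooter", "shoot", client_time_ns=1_000_050),
    ClientEvent("target", "defensive_ability", client_time_ns=1_000_020),
]
# The defensive ability was used first, so it resolves first even if the
# shooter's packet won the network race to the server.
print([e.player for e in resolve_in_client_order(events)])  # ['target', 'shooter']
```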
I actually think NTP might be good enough for this (despite what people say their reaction times are, milliseconds don't actually matter), but I guess game developers don't think of games as globally-distributed transaction processing systems (which is what they are, just a lot more write conflicts to resolve than your average database), and haven't experimented with ideas that are still only a few years old in that field. (The game industry also doesn't reward experimentation. If you're Google and you try to replace Bigtable with Spanner and it fails, it doesn't matter, you just keep using Bigtable. If you're a game company and your netcode is janky, you launch late, a competitor releases a similar game before you, and all the money you spent on development is gone.)
With games there is always the trust issue; can someone write a client that lies about the time they took an action? The answer is yes. But if we have technology that relies on similar client trust working in high-frequency trading, it should be safe enough for games. The stakes are a lot lower in a computer game than the financial markets. So I think good things are on their way.
You still can't trust the client here, no matter how precisely synced your clocks are, because it could maliciously reorder events/tamper with time (always claim you activated first, with a certain fudge factor to prevent detection) and it would be undetectable within the bounds of internet latency.
I'm not completely convinced that cheating would not be detectable after aggregating some statistics over time. (Similar to how you can extract encryption keys from server processes, simply by timing how long certain operations take.)
HFT aside, this also has interesting applications in distributed databases. Spanner, for instance, has consistency guarantees directly related to how synchronized the clocks are (keyword: TrueTime). It seems plausible that Huygens could make a significant improvement on performance of this kind of distributed database.
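For a sense of the scaling: per the Spanner paper, a writer has to "commit wait" out the clock uncertainty before its timestamp can be trusted as in the past everywhere, so the wait tracks the uncertainty bound roughly linearly. A rough illustration with invented epsilon values:

```python
def approx_commit_wait(epsilon_s):
    # Spanner-style commit wait: after picking a commit timestamp, wait until
    # the uncertainty interval has definitely passed before exposing the write.
    # The expected wait is on the order of twice the uncertainty bound.
    return 2 * epsilon_s

print(approx_commit_wait(5e-3))    # ~10 ms with millisecond-scale uncertainty
print(approx_commit_wait(100e-9))  # ~200 ns if clocks were synced to 100 ns
```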
This looks like a very impressive result. NTP has been doing its thing well for years, but a factor of 100 improvement in time accuracy would be amazing.
> In this paper, we present HUYGENS, a software clock synchronization system that uses a synchronization network and leverages three key ideas. First, coded probes identify and reject impure probe data—data captured by probes which suffer queuing delays, random jitter, and NIC timestamp noise. Next, HUYGENS processes the purified data with Support Vector Machines, a widely-used and powerful classifier, to accurately estimate one-way propagation times and achieve clock synchronization to within 100 nanoseconds. Finally, HUYGENS exploits a natural network effect—the idea that a group of pair-wise synchronized clocks must be transitively synchronized— to detect and correct synchronization errors even further.
Not an expert, but this seems quite a complex system. Since HF traders have huge incentive to game the system, my fear is that the next headline about Huygens will be about a new exploit found in the wild.
I don't think the article answers amelius's question, which is not about how you would achieve that precision but how you would use it. On an OS that only supports recording timestamps up to microsecond accuracy, what is the point of synchronizing your clocks within nanoseconds of each other, even if you could?
On Linux you can get nanosecond timestamps, and so could record them. This could be useful for capturing certain events. Is that what is meant? IIRC Windows XP didn't support resolving timestamps smaller than 8 ms, but we've moved beyond this.
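For example, CPython has exposed the nanosecond-resolution clocks since 3.7, so on Linux you can record them without rounding through float seconds (nanosecond resolution of the reported value, of course, not a guarantee of accuracy):

```python
import time

wall_ns = time.clock_gettime_ns(time.CLOCK_REALTIME)  # wall-clock time, ns resolution
mono_ns = time.monotonic_ns()                          # monotonic clock for intervals

print(wall_ns, mono_ns)
```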
Uh, I know it's a joke, but isn't geostationary orbit almost 36K km over the equator? The round trip time to NYC would be over a quarter of a second...
An interesting side-effect of this is that it would enable a standard of synchronization across geographic regions, such that one could treat a set of virtual machines as one ultra-wide-bus CPU with a 1 GHz clock speed.
All of the local overhead of real system resources and network synchronization could be handled by the remainder of the real CPU clock available to the bare metal, but contribute to the computation of a segment of a virtual bit field, at speed.
So, now maybe we get a commodity 4096 bit 1 GHz CPU as a service. Which is maybe comparable to a 64 core processor, but without the overhead of chunking down to the width of 64 bits.
"...across geographic regions, such that one could treat a set of virtual machines as one ultra-wide-bus CPU with a 1 GHz clock speed."
I'm not entirely sure what you're trying to say here, but I am entirely sure that it's wrong.
A precise clock isn't the same thing as the removal of latency, and the operations of a CPU are ordered. That is, I can't start working on the multiplication of A * (B + C) until the addition result is available. Furthermore, if the elements of the operation, B and C, or parts of those elements, were separated by miles (or even feet), the latency of that operation would increase by orders of magnitude.
I doubt that even a 1MHz distributed processor would be achievable as a large distributed bit field computer as you've laid out here.
If you're worried about overhead in computing, it is critical to remember that a foot is a nanosecond. I'd much rather break my data down to register size (and I often do) than ship my data over a wire or fiber (which I also often do).
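The "a foot is a nanosecond" point in back-of-envelope numbers, against the proposed 1 GHz virtual clock:

```python
C_M_PER_S = 299_792_458      # speed of light in vacuum
CLOCK_HZ = 1_000_000_000     # the proposed 1 GHz clock

cycle_s = 1 / CLOCK_HZ                    # one clock period = 1 ns
light_per_cycle_m = C_M_PER_S * cycle_s   # ~0.30 m of light travel per cycle

# Even 100 m of one-way distance inside a datacenter (and signals in fiber or
# copper are slower than c) costs hundreds of cycles before a carry can arrive.
print(light_per_cycle_m)        # ~0.2998 m per cycle
print(100 / light_per_cycle_m)  # ~334 cycles just for propagation
```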
Actually, if you marshal all of your addressable units up front (4096 bit sentences, instead of 64 bit words), which aligns well with raw allocation units on many file systems, as an end user of the service, the overhead (to you) is reduced to network I/O if the product is built correctly.
The only hard part requiring serialized synchronization is the carry bit, across compute nodes. Share the carry bits between nodes, and while relaying a sentence to a cluster of synchronized nodes, the pipeline can shoot the sentence into the cluster as a unit, proxy and chain together the carry bits with a coordinated execution plan, and on the other side of the pipe, you get your well-timed 4096 bit result, all at 1 GHz, because the service is designed and produced to handle input at nanosecond intervals.
What are the advantages? Predictability, and expanded throughput.
Now you can look at an entire passage of text and make a determination about it in less time. Or stack many passages and composite them to assess or intuit variation. Designing the product this way makes it easy to reason about, and thus easier to market and sell. Is it possible to make a profitable system that works like this? Gee, great question! There's no obvious answer.
But anyway, from the perspective of a subscriber, it's on them to marshall their data, and then, if they have operations for which the scale of 4096 bit chunks improves results, they can get their granular operations done at 1 GHz, which allows them to predict time spent and overall cost more easily.
(e.g. I have all these [less-than-but-up-to] 4096 bit toots marshalled in a single data store, from a shit ton of Mastodon instances (I did all the crawling and retrieving, and saved them in one place, as a standardized data set), and I think this fact might be true about some of them, here is the rule set to interpret, please give me back the members of the toot array that return true when the function of this rule set returns true)
BTW, don't get hung up on 4096 as "the best number" I just chose it because it's a nice square number.
"The only hard part requiring serialized synchronization is the carry bit, across compute nodes."
I don't think that's the only hard part. Branches, for instance, are rough.
"What are the advantages? Predictability, and expanded throughput."
I think the system you've described would definitely have some predictability, but I contend that it would be predictably slow. Furthermore, given that everything is going to have to be pipelined up to its eyeballs, you don't need nanosecond synchronization to achieve high throughput. Audio, for instance, often achieves higher throughput than clock. Look at the AES MADI spec for an example of this (basic link at Wikipedia here: https://en.m.wikipedia.org/wiki/MADI ).
I'm just not seeing how this is practicable, or, more critically for this conversation, how it is uncorked by precision clocking in any particularly meaningful way. It strikes me as an approach that would have to deal with edge cases robustly, largely using the same mechanisms that would be necessary for imprecise clocking (but with assured sequencing).
"But anyway, from the perspective of a subscriber, it's on them to marshall their data, and then, if they have operations for which the scale of 4096 bit chunks improves results, they can get their granular operations done at 1 GHz, which allows them to predict time spent and overall cost more easily."
This strikes me as similar to the complexity sizing in Craig Gentry's fully homomorphic encryption system, in that all operation sets up to a configured encodable complexity require the same computational effort, effectively inefficient for smaller operations. For timing attacks in cryptosystems, it actually seems reasonable to retain fixed effort, even if Gentry's original system was largely impractical.
For general computation? I think that the sweet spot between job chunking and dataset chunking for the system you've described may not actually exist.
For financial transactions, it would certainly allow for fast high-precision floating point math. Imagine IEEE 754 4096-bit floats. Not sure anyone would actually use this, and you'd still have to standardize the rounding precision, but it might be an interesting vein of research.
Still, I agree with you -- what the OP described is not a 4096-bit processor.
Now highly-synchronized VMs -- that's an entirely different matter. Probably a boatload of use cases for those.
64 bits already gives you 16 digits, which is enough for a trillion dollars to one one-hundredth of a cent. So maybe there is someone who needs 128 bits, which has been part of IEEE 754 since 2008, but that is probably enough to calculate the total of all financial transactions ever done.
The alpha calculation can (and should) use floating point math. If the market has a midpoint of $99.99 with a bid/ask of $99.98/$100.00, you could compute a bunch of signals and end up with an alpha-adjusted midpoint of $100.00383736383..., at which point you'd convert it back to fixed-point and then try to buy at $100.00.
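A sketch of that last step, assuming a one-cent tick grid and using Python's Decimal for the fixed-point side; the numbers are the parent's example and the function name is invented:

```python
from decimal import Decimal, ROUND_HALF_EVEN

TICK = Decimal("0.01")  # assumed tick size

def to_order_price(alpha_adjusted_mid: float) -> Decimal:
    # Signals are computed in binary floating point; only the price that goes
    # out on the wire is snapped back onto the fixed tick grid.
    return Decimal(str(alpha_adjusted_mid)).quantize(TICK, rounding=ROUND_HALF_EVEN)

print(to_order_price(100.00383736383))  # Decimal('100.00') -> try to buy at 100.00
```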
A floating point representation is not really the issue, the issue is not using base 10, and IEEE 754 specifies base 2 and base 10 floating point formats and operations. But I am of course not sure whether the original comment referred to base 2 or base 10 and given how common the mistake of using base 2 floating point numbers for financial calculations is, you may be correct with the intention of your comment.
I'm aware of the fact that you don't use floating point math for finance -- for exactly the reason you described -- but the academic in me wonders if you could formally specify a high-enough degree of precision -- and all the corner cases -- to allow FP math for even just a subset of transactions. This would (in theory) allow programmers to bypass the Decimal classes in your favorite OO language (or GMP if you're a C fan).
My point was more that it is wrong to say that financial calculations should not be done using floating point formats, for example Decimal in .NET and BigDecimal in Java are floating point formats and they are the types you should use for financial calculations. The important difference as compared to formats like IEEE 754 binary32 (formerly single) and binary64 (formerly double) is that the representation is based on base 10 instead of base 2. Fixed point or floating point and base 2 or base 10 are two orthogonal choices.
So when you initially mentioned high precision floating point numbers for financial calculations that was not necessarily a bad idea because you might have thought about base 10 floating point numbers. The comment I replied to however assumed you meant base 2 which of course most people do if they say floating point numbers without specifying the base and which of course is a bad idea for financial calculations more often than not. I just pointed out that assuming base 2 is usually but not technically correct.
And you can of course use base 2 floating point numbers for financial calculations - 32 bit, 64 bit, or 4096 bit - you just have to keep track of the accumulated errors and stop or correct the result before the error grows into the digits you are interested in. But why would one want to do this? The only thing I can really think of is that you need maximum performance and there is no hardware support for base 10 floating point numbers. And just using integers as base 10 fixed point numbers, which would often be an even better solution, must not be an option.
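A quick illustration of the base-2 vs. base-10 distinction, using the standard library's Decimal (a base-10 floating-point type) against the built-in binary float:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so cents drift:
print(0.10 + 0.20 == 0.30)        # False
print(0.10 + 0.10 + 0.10)         # 0.30000000000000004

# Decimal is still floating point, but base 10, so decimal cents stay exact:
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
print(Decimal("0.10") + Decimal("0.10") + Decimal("0.10"))   # 0.30
```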
I don't know how you found your way onto the addition operator (+) on your keyboard, because that's not at all what I was driving at.
I think you are... JUMPING! TO CONCLUSIONS! (get it?)
Anyway, at its core, much of the logic within a Turing machine winds up being addition in an accumulator. So, you widen the pipeline, and that adds place settings to the numeric values addressed at a location in RAM.
I think we both know that each place setting increases the maximum value of the addressable unit by an exponential factor of the base, which in computing, and so in this instance, is binary.
Specifically: 2^4096 instead of 2^64
Golly, did I get my math right? This sure is difficult for me to understand!
Why would anyone want a 4096 bit CPU? Oh, I dunno. I suppose 640K ought to be enough for anyone.
Suppose you do a simple addition on your 4096bit "CPU", you have to propagate the carry from the first 64bits to the next 64. How do you do that within your clock cycle over the internet? You'd have to pipeline them so that each subsequent 64bit add waits for the previous carry, but then wouldn't it be orders of magnitude faster to just do it on the same CPU rather than taking the time and resources to do a single 64bit add followed by a high latency network transfer? At any rate what does clock synchronization buy you here exactly, data transfer are still high-latency and high-jitter, at best you're isochronous but definitely not synchronous.
Either I completely misunderstand what you're proposing or it doesn't make sense at all.
I’m not quite sure what GP is getting at, either, but I can sort of see the lockstep synchronization described letting you build something like the original Thinking Machines Connection Machine out of more distributed parts.
The original Cray supercomputers also benefitted from a design where every wire in the pipeline was the same length for “free” synchronization courtesy of the speed of light.
How would the math work on that? Simple addition now requires coordination of results across many CPUs. Worst case is N-1 ticks where N is the CPU count.
What operation would get faster by such a virtual CPU?
An organization seeking to market a product based on any spare slack or wastage of their bare metal could stitch together a niche product like this from enough resources, and price it in the space where it nets them money, and is cheaper than something an individual or small business might be capable of building on their own, with the cheapest possible parts.
That's basically the core principle of every cloud product being sold.