About 15 years ago, I became a full-time employee of the Canadian Government. I was part of a team that was rebuilding a pretty huge and critical system. It was a disaster. I left after a year. The gun registry before it, the Phoenix pay system since, and many other failures in between... it seems pretty hopeless.
I initially put all of the blame on the expensive consultants, who were doing most of the work, for being technically awful (writing software like Fowler's Analysis Patterns book was a holy grail).
While I still think they bear the brunt of the blame, in retrospect everything was awful. The project was both a business and technical transformation at once. There were too many managers, too many business experts and overall poor leadership. Rather than breaking it down into manageable chunks, it just kept growing and growing. There was no concern or accountability for waste.
Sometimes replacing a legacy system with a new system is difficult. But most of the time, it can be done as a slow and steady transition...so long as egos and ambitions stay out of it.
I did some contract work for a federal agency in Ottawa about 15 years ago. One day in the coffee room I heard a bunch of consultants reminiscing about what a great project the gun registry had been. Coming from Alberta, where that project was a notorious and spectacular failure ($1B for a CRUD app that was years late and didn't work), at first I was shocked. But it was fascinating to get a window into how the other half lives! They were nostalgic for the gravy train. I'm glad I didn't say anything, but it was a learning experience in how one person's boondoggle is someone else's windfall.
Years ago I interviewed a PM who listed the Gun Registry and they had the same feelings, as though the project was a complete success. Unfortunately, said person did get the job... and no surprise things didn't go well thereafter.
I've had consultants submit projects and go on these long talks where they genuinely felt they had produced something awesome and awe inspiring - when in reality the output was poor. While some of this is bluster, one person's lifetime achievement is another's hungover mad dash attempt to finish a project.
Not all government departments are like that, but too many are, probably the majority.
What I've noticed on the ones that work is that they have a core group of technical managers and architects who hold in their heads the whole system architecture and organization. These are people who, for more than 10 years, decide what will be done and how it will be done. This seems to happen in the more financially crucial and essential areas of the government (e.g., revenue from resources). It is in the more accessory/non-essential departments, where there is rotation of technical staff, that the chaos thrives.
Profit centers are usually organized and efficient. Cost centers aren't.
It's very slow, but I'm always impressed by how functional it is and how easy it is to find what I'm looking for. I'd rate it 7/10, which is higher than I'd rate a lot of Google's stuff.
It’s pretty clear that it’s a new frontend talking to a very old (probably green-screen style mainframe application) backend through some kind of gateway, though. Maybe with some modern DBMSes serving as ancillary data storage, but with the legacy system definitely being the system-of-record that everything has to interface with in the end.
If a new-ish web app for a bank or government service has a “maintenance period” where it becomes inaccessible between 5PM and 9AM each day—without the actual web layer going out, just seemingly changing its access ACLs on the data layer—that’s because they’ve got the web app fronting a mainframe that was designed for explicit OLTP ingest / OLAP batch ETL phase separation. The new record event stream gets collated at night to build queryable/indexed tables, often on hardware so slow that it needs the entire maintenance period to finish building those tables+indices. (This is why better performance in these systems is hard: if you want higher OLTP ingest throughput on the same hardware, you can have it, but then they’ll need longer maintenance windows, so long that they’ll now break their SLAs by being unavailable during business hours!)
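To make the phase separation concrete, here's a toy sketch of the pattern being described; the table and function names are hypothetical and sqlite3 is only standing in for the mainframe's storage, so treat it as an illustration rather than how any real system is wired:

```python
# A minimal sketch (hypothetical schema and names; sqlite3 standing in
# for the mainframe's data store) of the split described above:
# cheap append-only ingest during the day, a nightly rebuild of the
# queryable/indexed tables during the maintenance window.
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Daytime (OLTP): new records are only appended to an event stream.
    CREATE TABLE event_stream (
        id INTEGER PRIMARY KEY, account TEXT, amount REAL, posted_on TEXT);
    -- Night (OLAP): the maintenance window rebuilds this queryable table.
    CREATE TABLE account_balances (account TEXT PRIMARY KEY, balance REAL);
""")

def ingest(account: str, amount: float) -> None:
    """OLTP path: a cheap append, no reporting-index maintenance."""
    conn.execute(
        "INSERT INTO event_stream (account, amount, posted_on) VALUES (?, ?, ?)",
        (account, amount, date.today().isoformat()))

def nightly_maintenance() -> None:
    """Collate the day's events into indexed, queryable tables.
    While this runs, the web front end flips to 'maintenance mode'."""
    conn.executescript("""
        DELETE FROM account_balances;
        INSERT INTO account_balances (account, balance)
            SELECT account, SUM(amount) FROM event_stream GROUP BY account;
        CREATE INDEX IF NOT EXISTS idx_balance ON account_balances (balance);
    """)

ingest("A-100", 25.0)
ingest("A-100", -5.0)
nightly_maintenance()  # the 5PM-9AM window exists to fit this step
print(conn.execute(
    "SELECT balance FROM account_balances WHERE account = 'A-100'").fetchone())
```

The point of the toy: faster daytime ingest just makes the nightly rebuild bigger, which is exactly the throughput-versus-window trade-off described above.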
And yes, technology as predicated on energy flux is a principle I've largely accepted. Vaclav Smil's work is singular in this, and I find Daniel Yergin's to be so as well, though that seems largely unintentional/accidental.
> writing software like Fowler's Analysis Patterns book was a holy grail
I feel like there’s something to be said for constructing an architecture entirely out of “formalized” architectural patterns, when you know that 1. you’re not going to be around to explain the architecture to whoever has to maintain the project 20 years down the line, and 2. Anyone who does come onto the project to maintain it is going to have very little time to get up to speed, since they’re very likely on a short contract to implement a single fix (“Y2K compliance” being a good example of when this happened.) At least, if all the architectural arrangements are copied out of a book, you can just put a note in a README at the project root to read said book. Might skip a lot of “software forensic anthropology.”
I mean, that, or heavily document every piece of the architecture, to the point of writing a book on your architectural choices (e.g. the reams of documentation on Erlang’s OTP-framework architecture, originally produced as a side-effect of Ericsson engineers documenting their architectural choices while writing code for their network switches.) But, well, did the customer pay for that? No? Too bad.
(I find this is one of the major differences between the way a regular IT consultancy handles these kinds of contracts, and the way a bigcorp like IBM handles a contract. The bigcorps factor things like “reams of documentation for later maintenance (not necessarily by us)” into their negotiated price, because they’ve been on both sides of that, and also have had customers who have been burned by a lack of it. But this makes their bids higher, so they’re often underbid by novice consultancies who don't share this mindset...)
Canadian here, though I left 20 years ago; I still read CBC news daily.
That Phoenix pay system was a total cluster that lasted for years. I remember reading headlines and thinking “they haven’t fixed it yet?”.
Made the ACA website in the US look like a resounding success. People were literally not getting paid for months (years?) and the gov’t was like “working on it!”.
They still haven't fixed it. My partner works for the federal government. She was bumped up to a new pay scale and suddenly stopped being paid for 2 months. When she did start getting paid it was at the old level and she hasn't received the missing 2 months salary yet. She had to open three separate cases to 1) receive the missing pay, 2) have her pay scale updated in the system, and 3) receive the difference between her old salary and new salary for however long they continue to pay her at the old rate.
One of her coworkers hasn't been paid in almost 6 months.
> There were too many managers, too many business experts and overall poor leadership.
This is Canadian Government Bureaucracy in a nutshell. Decades of (often minor) scandals have caused successive Governments to restructure in such a way that it is nigh-impossible to assign responsibility; they have done so by making it extraordinarily difficult to form clear direction and make decisions.
What we need is a massive downsizing of the public service at the managerial level, and to begin placing the burden of responsibility upon departments. They need both the power to make decisions and the associated risk of failure; whereas now the decision-making power is nigh-ephemeral and the risk is distributed broadly and thinly.
You're describing every internal government system migration in the world :(
Managers, subject matter experts and consultants are paid by the hour unfortunately, so the incentive is not to deliver the project, but to instead make it take as long as possible.
Consultants can be great for filling in very specialized gaps, but beyond that I don't know how anyone can think it's a good idea. Aside from a history of failure and consulting companies acting as parasites, the incentives simply don't align in the first place. Why do a good job that will last years when a bad job will make you more money and keep the gravy train rolling?
Of course, building up a good internal team inside a government department also seems to be fraught with peril, so maybe it's the only option.
It's not the egos and ambitions, it's the oversight and accountability. You even said it yourself, there was no accountability for waste.
When the top 1/3 of the org chart is gonna get fired by the next administration regardless, there's no reason (for the current administration) to hold them accountable for not doing good work (not to mention that most people aren't going to be harsh to their own political appointees), and that sort of culture rolls downhill really well.
I don't think this is accurate. Aside from the minister, they're all civil servants. For example, the current Deputy Minister of Finance was appointed by a different leader (from a different party) over 5 years ago.
Just because they are not politicians does not mean they do not get changed out. In my experience in a particular US state the top three layers (director and two levels of reports) pretty much all get changed out with the administration. Positions below that are (mostly) union positions and therefore protected by the union.
Yup. Years ago, an employer sold some software to a Canadian government department. Every couple years, it becomes somebody new's bailiwick, and I get a couple emails asking what it is and how it works. I am not sure that it has ever been actually used, but they keep paying the renewal invoices.
I work on one of the systems called out in the article. It's a fabulous disaster. I don't think it'll keel over and die anytime soon, but it absolutely should be phased out.
The problem is that these big projects, like modernizing core social service systems, are given big budgets and big expectations. Then they start a big team to work on it, with short deadlines and not enough direction. At some point it is discovered that the legacy system's business logic is mostly undocumented/baked into a mainframe/functioning through sheer luck/nothing but PL/SQL and DB triggers. Then several contracting companies are brought in to try and sort out the mess, solutions are purchased from billion-dollar private companies (all of which have screwed over the Government of Canada one way or another), and project management is shuffled around as fast as HR can fill out the forms. After three years the money runs out, the project dies, and then we try again in two years.
The solution is never a slow and steady modernization. It's never small, focused teams working on improving or rebuilding individual components. It's never upgrading infrastructure, or moving to modern best practices.
It's always all or nothing. Solve the problems, without fixing them.
From my past experience overseeing large Gov't system upgrade/replacement projects, I can say this: a major issue is that the business/user community not only has little/no expertise in business re-engineering, but is also extremely resistant to change. Many times this means that requirements are vague and generally just regurgitate whatever processes are currently in place - rarely are they truly wire-brushed to be as efficient as possible.
I recall one fairly large project where the Director of an affected agency would nix any changes that affected their agency. Period. Status quo was sacrosanct.
Part of the equation is that the staff have a very strong union and very few in Gov't are willing to make any change that will impact unionized staff (i.e. becoming more efficient is frowned upon).
Combine all of this with the (fairly opaque) hidden agenda of consulting agencies to keep the project going for as-long-as-possible and it's just a recipe for disaster.
There's more to refactoring than just code; when systems are this legacy and cruft-ridden the whole thing needs to be looked at from first principles and an actual workflow designed based on what's possible today or seemingly within grasp.
Only with a clear idea of what things should look like can the new structure be built, tested, (and while testing) training written and validated, and then a rollout planned (which is its own whole other project).
Do you have practical experience rewriting those kinds of legacy systems?
Because my personal intuition would be that trying to do both a revamp of the process and the technical migration at the same time is the actual recipe for disaster.
I would do the migration while remaining as close as possible to the original system (removing the obvious unused functions), and only then start transforming the business process.
Studying the past lets us learn from what it took to take advantage of new technologies.
The desire for very little change (which is difficult, and does need someone to push the politics of that change through) would leave us in the belt-and-pulley-driven workshop layout, making horse-and-carriage gear.
Also, considering forward thinking - this is baseless speculation, but my gut feeling is that science fiction written today is more likely to be 'close enough' to how everyday computers might work in the future that such systems would seem like a plausible alternate reality. Contrast that with what science fiction written even 50 years ago thinks about anything involving computers or automation.
That's why it seems likely that the overall workflow should be examined again including a look at what is actually needed and what tools we currently or might have to accomplish those tasks. The existing systems, interfaces, and forms are __some__ of the tools to consider, but if there are actually good reasons for evolving or replacing them those changes should be documented and made.
I'm currently working on this process, replacing 40+ year Cobol systems with modern services. It's in private business, so we probably have more flexibility than in government, but a lot of the same principles apply.
What we don't do is look at the old code, document how it works, and then reproduce that. Prior to this legacy system being built the entire company worked with paper processes and documentation, so it was a paradigm shift for how the business worked. That system is slow to update, so how they work is heavily influenced by the business' thinking 40 years ago. Our replatforming project is seen as essential for the business' continual survival, so we're allowed to question processes, simplify where we can, and work as equals with the business in defining new processes. There are definitely hold-outs and resistance from some quarters, but once you launch some successes people start converting and accepting the process.
If a system has been continuously adjusted over time (like ours) we still work from first principles. Even though we have institutional knowledge in the form of people who have worked here for 40 years, it’s often impossible to know the reasoning behind why a change was made. Most ageing code bases contain redundant logic due to some situation in the past which no longer occurs, or a constraint imposed by a dependency which will now be removed.
For example, what was once a file-based batch process meant that other processes had to wait and split their work accordingly. Replace that with an event-based system and everything can run continuously, and a lot of the restrictions disappear.
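As a toy illustration of that shift (the record format and handler are made up, and queue.Queue is just standing in for whatever event bus or broker would really be used): the same work done as an end-of-day file sweep versus a consumer that handles each record the moment it's published.

```python
# Illustrative sketch only: a batch-file sweep vs. a continuous event
# consumer. Record format, handler, and file path are hypothetical.
import queue
import threading

def handle(record: str) -> None:
    print("processed", record)

# Old style: downstream work waits until the whole day's file exists,
# so every dependent process has to schedule itself around that cut-off.
def end_of_day_batch(path: str) -> None:
    with open(path) as f:
        for line in f:
            handle(line.strip())

# New style: each record is handled as soon as it is published; nothing
# downstream needs to know about a nightly cut-off any more.
events = queue.Queue()

def consumer() -> None:
    while True:
        record = events.get()
        if record is None:  # sentinel so this sketch can stop cleanly
            break
        handle(record)

worker = threading.Thread(target=consumer)
worker.start()
events.put("payment:123")
events.put("payment:124")
events.put(None)
worker.join()
```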
Another commenter summed up my feelings on PL/SQL. It's not that it's inherently bad, it's the bad practices that seem to follow.
Modernization to me (specifically in the context of the GC) means refactoring/rewriting systems to follow modern industry standards and practices. Things like proper version control, automated testing, automated/semi-automated deployments, and monitoring and logging.
I think there are major benefits in adopting these practices for both the Canadian public and the developers building/maintaining the solutions. Unfortunately, getting a budget to move the source code from a shared network drive to Git is next to impossible. And God forbid you want to spend time adding any sort of testing.
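As a concrete example of the cheapest of those practices: characterization tests that pin the current behaviour before anyone touches it. calculate_benefit() below is purely a hypothetical stand-in for logic that today lives in PL/SQL or COBOL; the point is that once the code is in Git and runnable, tests like these cost almost nothing to keep.

```python
# A minimal sketch of characterization tests (pytest style). The
# function is a hypothetical stand-in for legacy business logic; in
# practice you would call through to the real routine instead.
def calculate_benefit(income: float, dependants: int) -> float:
    """Stand-in for logic currently buried in an old system."""
    base = 1200.0 if income < 30000 else 600.0
    return base + 150.0 * dependants

def test_low_income_with_dependants():
    # Pin the behaviour observed today so a rewrite can be checked against it.
    assert calculate_benefit(25000, 2) == 1500.0

def test_higher_income_no_dependants():
    assert calculate_benefit(45000, 0) == 600.0
```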
The stack itself is rarely the problem; the SQL scripts and triggers in these kinds of systems, as the OP says, are not properly version controlled or documented. In a refactor or revamp these are discovered along the way.
I work for a digital services agency currently working with several provincial governments, and have some perspective on this. I call it "the IBM hangover".
At some point, someone gives IBM or various consultants millions of dollars to implement some IBM software--PowerBuilder, Java, Domino, etc.--which they do, following RUP, which means a pile of UML docs on top of it. It actually works, and the gov't says "great, we can stop funding ongoing dev for a couple of decades."
Two decades later, CGI has taken over the IBM project for half the maintenance fees because they know how to do RUP, and thus meet the piles of NFRs IBM left behind (such as "all metadata changes must be auditable", meaning all UML diagrams must have a documented review/approval chain separate from source control).
I sat in a meeting where a stakeholder showed us the report generation page, and said "if we click on this link, it crashes the server, and we have to open a ticket to restart it".
Software does rot, and gov't procurement processes and requirements are those left behind by IBM to prevent smaller, "agile" agencies from doing meaningful work. To my mind, there are two flaws here: one is letting IBM in, in the first place, to set the ongoing standard; two is refusing to acknowledge that every organization needs an organic software dev capability that can use outsourcing as a resource multiplier, not as an IT replacement.
Does anyone have any idea what outdated technologies they're using (notably the ones that are sixty years old)? It would be interesting to learn which software has managed to stick around for so long and what it does.
Two anecdotes:
1) I know someone who deals with legal documents for the Canadian federal government and she has multiple word processors (of various versions) for these documents. Since minor changes can affect the meaning of these agreements, they are only edited or amended in the software in which they were originally created. She is understandably very excited for when the ones created in old versions of Word Perfect expire and a new agreement is created in newer software.
2) Only in 2019 did the US Nuclear Command transition away from their own 50 year old technology: floppy discs from the '70s. [1]
It always fascinates me to see what sticks around, for what reasons, and how it affects people's work. I've heard stories of developers creating fancy UIs to cover up ancient Fortran software, so that it's less painful to work with, but they don't need to replace the underlying software.
Gonna take a wild guess and say old mainframes and COBOL. When I trained at St. Lawrence, they were training us on that for the Ontario government, since they had a heavy reliance on that style of technology. It was way cheaper to continue to train fresh and ignorant new students to learn the old ways than to convert to anything new and decent.
Once installed in the Ontario government, there was kind of a divide between people who wanted to move on up to the Federal government (seemingly they were not concerned about skills not being transferrable, hence my guess on COBOL and old mainframes being in use), and the other half who wanted to just put in their time in one place and collect their paycheque and pension.
Also, can confirm on the expensive contractors. IBM in there pushing Rational Rose and hardcore waterfall did nobody any favours at all.
Exactly. My first job out of university in 1996 was with the Canadian government. One of my first tasks was to update a COBOL program that had originally been written in 1972. I left that job in 1997, but I doubt the program I updated has changed much in the 24 years since, so it's at least 48 years old at this point.
> I've heard stories of developers creating fancy UIs to cover up ancient Fortran software, so that it's less painful to work with, but they don't need to replace the underlying software.
An API is an API. As long as the interface is well designed, my POST request can release a carrier pigeon on the backend for all I care.
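In that spirit, a minimal sketch of the facade idea (the endpoint and the legacy call are hypothetical placeholders, not any real department's API): a thin HTTP layer whose handler delegates to whatever the old backend actually is, so callers only ever see the clean contract.

```python
# Sketch of a thin facade in Flask. submit_to_legacy_backend() is a
# placeholder for the real mechanism (batch file drop, MQ put, a
# screen-scraped terminal session, or indeed a carrier pigeon).
from flask import Flask, jsonify, request

app = Flask(__name__)

def submit_to_legacy_backend(payload: dict) -> str:
    """Hypothetical: hand the request to the legacy system, return a tracking id."""
    return "REQ-0001"

@app.route("/applications", methods=["POST"])
def create_application():
    tracking_id = submit_to_legacy_backend(request.get_json(force=True))
    # Callers see a stable JSON contract; the backend can change freely.
    return jsonify({"tracking_id": tracking_id}), 202

if __name__ == "__main__":
    app.run(port=8080)
```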
> 2) Only in 2019 did the US Nuclear Command transition away from their own 50 year old technology: floppy discs from the '70s. [1]
From my reading of the article, it appears they are still using 1970s era IBM Series/1 minicomputers. It sounds like all they've done, is replaced the physical floppy drive with a floppy emulator. This is a device which attaches to the legacy IO bus, and appears as a floppy disk drive to the minicomputer, but the actual data is stored in flash memory instead of magnetic media.
Nuclear tech is completely different from IT solutions. Same with aerospace. 'Change' implies massive risk to the system. If something works, leave it. If it doesn't need to be updated too often, then make as few changes as possible.
You need to think of things like that like a low-scale industrial system. It needs to do what it does.
If you go to a candy store at the beach and see a 150-year-old saltwater taffy machine, you say "ooh, cool". The thing that makes a missile go up is essentially the same thing: it needs to perform its function to spec, period.
Yup. In tons of industrial settings it's common to have machines that are a century or more old.
I know someone who works for an industrial gear company and routinely reconditions things like drawbridge mechanisms or a rubber rolling mill (do they call it a mill?), and most customers just want things to keep on working like they've always been working, since they've built their process around them and machine throughput is rarely the bottleneck. The improvements customers go for are modern bearings (quiet and cheaply serviceable), and if they have to have gears made they generally opt for a more modern tooth profile (potentially stronger, quieter and more efficient).
>Since minor changes can affect the meaning of these agreements, they are only edited or amended in the software in which they were originally created.
How much difference can switching from Word 2007 to Word 2016 make? Even switching between Microsoft Office and LibreOffice, the only thing that would get mangled is the formatting, not the text itself.
I can imagine (badly-drafted) agreements where things are specified by page number or level-of-indentedness, like "Until the major conditions listed below are met, the government will supply the items listed on Page Y." If those references were inserted manually, any formatting changes could have unintended consequences. If they're done automatically (with SmartText), they might break in other software.
I would guess it's mostly paranoia--you'd have a hard time convincing a judge that the one orphan item that spilled over onto page Y+1 wasn't meant to be included--but I can sorta see not wanting to risk it.
In this case that's probably the right approach. The rest of the hardware it's connected to must all be custom and probably can't handle anything other than that specific floppy interface, even at the wire level. As long as it's documented somewhere it's fine.
On a much different scale, I ran into a similar problem once. Not for me, but for a customer. About 10 years ago I did a PC breakfix job onsite at a small business that made trophies. Their main business was bowling leagues and other local sports. The computer I was there to fix was fine. But the computer that ran the engraving machine- the one that they made all their money with- was an IBM PC/XT from the early 1980's. No hard drive, everything was on 5.25" floppies.
I mentioned that they should replace that thing sooner rather than later, but their objection made sense for their use case: The replacement would be many thousands of dollars and this old setup still worked. The computer had one job, and it did it. Who knows, maybe it's still there, still mashing away on its floppies every day. It wouldn't surprise me. If it is, it's being held together by nicotine deposits and luck.
Indeed. At the time, that wouldn't have been an option. And, given their technical abilities (I was there to fix a really minor problem) I doubt they could piece together something that your average maker-type such as us could.
IT might kill government. It has just the right mix of being difficult to predict, measure, and evaluate (technical competency, etc.) wherein a small project can blow up to $1 billion.
This is where Western Governments are actually corrupt, but it doesn't show up in Corruption Transparency Index.
I'm not ideologically a 'small government' person, but I have absolutely no faith in our government's ability to do anything reasonable in IT.
Sometimes I think we need 'government in a box' IT solutions. Sadly, even if we did, they'd still labour over them in some way and make it expensive: the whole point is for vast cadres of the civil services, and consultant/lawyers etc. to suck money out of the system.
You hear this attitude all the time on HN, but never see any kind of disruption happening. After working with big companies (in many ways not so different from governments) and also a few government agencies, I feel like "this is all corrupt, so we cannot do anything" is mostly a cop-out, so people don't have to accept that maybe they were a bit naive when they ran around saying "this is all needlessly complicated, it could all be so much better and more simple!" - and maybe the complexity is there for a reason.
It's there for reasons but the reasons are mostly not that good.
The big problem is that these projects are managed by people who aren't technical, who have never built anything concrete and whose gulf of cultural experience between manager and worker is enormous.
In the rest of the world this is being fixed by programmers drifting out of the enterprise and into dedicated software firms. The non-technical people in "the business" get to issue RFPs, watch slide shows and be in meetings: where they like to be. They cut a cheque and get a system they know works, they know what it does, it has a predictable price, it's a web app so it bypasses their terrible IT department. The technical people get to work for each other and bosses who are themselves former programmers, so they don't get asked every five minutes to draw Gantt charts with to-the-day ticket time estimates. Everyone is happy.
That hasn't really happened with government, probably because there are so few of them. Government IT in a box is a great idea though.
No. The Canadian government spent $1B on a simple gun registry, a basic CRUD app that barely worked. US 'healthcare.gov' doesn't need to be that complicated, but it was $2B, screwed up by a Canadian contractor, CGI. A small team of Google devs had to come in and fix it.
Most companies that screw things up that bad, will fail. If they don't, it's still their right to waste money on dumb projects - it's their money.
Government failures (at least in Canada) generally exceed those in the private sector for that reason, exposing the dire systemic problem of 'no competition, no oversight, lack of competency' on a scale rarely seen elsewhere.
Not only is there 'no incentive' to fix problems, often there's also a negative incentive.
It's 2020. The technology to put my medical history online has been available for 20 years. Ontario, Quebec etc. have still completely failed to do this. I still have no easy way of finding out which clinics are available for me, and when I do go to a new one, they have to open an entirely new file, totally unaware of my historical medical issues. To make matters worse, it's literally illegal for me to pay anyone to provide me with medical services. It's kafkaesque.
A very basic medical history system, that merely documented doctors notes etc. could be done 'on the cheap' (relatively speaking) - but it's far from happening.
Even an intelligent regulatory mandate could solve the medical records issue, i.e. providers must participate in XYZ system, with ABC components, designated by the government. But we can't even have that.
It's really bad and I don't see any path to getting better until government develops a whole new attitude towards IT.
I don't think your information is up to date. I'm in Quebec, and my hospital and its clinics have all their information available in a portal. Test results are communicated electronically between sources. I don't speak French well so don't use it, but apparently there is also a provincial level portal. [1] gives an overview of how it all fits together.
Granted, it is all happening slowly. I worked in ehealth in Ontario around 2000 and they were providing huge incentives for organizations to go digital, but most didn't want to due to habit, and because each institution is run as an empire.
And ultimately, the results from each system, even if it's the same test, are often not really comparable, so the dreams of results that can be compared by the consumer, and precise reasoning systems for actual AI are another generation after consolidation.
Replace government with any large, complex organization. There is little difference. I've worked in government and large companies, and they are pretty much the same.
The only difference is the governance structure. In .gov, you tend to have professional / civil service people at the senior director level who know their business inside and out, with a political layer of management who drive change and vary in competence.
Medical records are a great example of how .gov/.com doesn't matter. When stuff gets complex, IT sucks.
"Replace government with large, complex organization. There is litle difference. I've worked in government and large companies, and they are pretty much the same."
I don't agree at all.
Big corporations can fail to do many things where it's not really important, so it might seem like 'failure' but mostly it's a function of market conditions. Other big failures (say Boeing 777) are understandable due to complexity.
Very few groups on earth can build such airplanes.
Anyone can build a gun registry.
I loathe how long it takes my bank teller to speak to me, but my banking services are in the end, amazingly cheap for what they provide.
Governments do a reasonable job at things like contract allocation for road maintenance, some kinds of construction, but they generally do a bad job operationalizing anything.
Yes, it is absolutely true. Literally from the Canada Health Act: "Private health insurance plans are prohibited from duplicating coverage for health services provided in Canada which are insured under the Canada Health Act."
If you start to provide services normally covered by the government, you will be shut down, or you'll have to take your case to the Supreme Court where this law is still being tested.
There are places that provide parallel services, they operate in a grey area. For example, the Supreme Court of Quebec ruled that private services can be provided for treatments wherein the government does not provide 'timely service' i.e. 'wait times are too long'. But exactly the parameters of those 'wait times' nobody knows, and the only way to find out is to go to the Supreme Court. So not a good business plan.
The way businesses solve this, just about every time, is to run an emulator of the old system on current hardware.
I have heard of systems with five layers of emulation, yet still quite a lot faster than the original machine.
I gather Bloomberg has begun to do this, emulating SPARC user-space on Intel so they can stop paying Oracle to support long-since EOL'd 32-bit equipment and OS. (Or maybe some sort of hybrid.)
I can understand why no expensive consultant would suggest it; it is very cheap, and quick to set up. No fat contracts there.
I wish we'd develop a similar agency to the US Digital Service. I think any modern government should have a digital/IT branch... a MIO (Minister of Informatics), etc. I know it's idealistic, but I think we could pragmatically solve Canada's digital issues carefully without costing us billions in consultant-overloaded failures.
There's the Canadian Digital Service, modelled after the UK's Government Digital Service (GDS). Provinces like Ontario are also creating their own Digital Services groups, and Ontario's online services have improved tremendously because of it.
That's good to know. I was/am interested in working there. I might apply anyway just for the experience (currently not working in software development, but I'd like to get back into it).
Shared Services was setup to handle the (relatively speaking) easier part of the "digital issue": infrastructure. And, by public accounts, it's been a disaster.
Consultants in Canada are usually friends of the government in power, and their contracts are usually narrowly specified to make the selection process converge to their group. We had a useless proprietary educational computer system in the 80's and 90's that failed: the ICON - note the last three letters of the name! They appeared visionary to the public in press releases, but they were proprietary kludges that endured, and froze their state of technology while the field raced ahead. They never worked. https://en.wikipedia.org/wiki/ICON_(microcomputer)
But the consultants were all over it, like ticks on a steer, they grew fat.
The Ontario Archives refused to accept the donation of the hardware and software for preservation, so it's dead as a doornail.
I recently got a visa to Canada. The J2EE application for visa applications didn't work for a full weekend. I opened some tickets, and found that clearing cookies fixed some of the issues.
I finally went there for a software event, and people in Canada are sweet, calm, happy and non-stressed.
I feel that one of the issues that's never talked about is how government contracts are awarded. 'Analysts' and 'Architects' prepare a document outlining the spec of the program. Once the document is out, every consulting firm is free to access it and place a bid on how much they would charge to implement what was outlined in the procurement document. The government is then forced to pick the lowest bidder. It doesn't matter if what's asked by the government doesn't make any sense; the engineers on the contractor's side are forbidden from contacting government employees (for anti-corruption reasons).
I've heard many stories where the contractor knew the job would have to be done twice the moment they read the procurement documents. But they couldn't voice their concerns. And if they suggested doing what they knew was the right thing, that would have made them ineligible for the contract, as it wasn't what was required. Future-proofing the bid or trying to deliver something closer to what they ended up shipping was also not possible, because this would have made them more expensive than the bidders following exactly the request. In the end they ended up rewriting most of the code at their usual billing rate on top of the original fixed-cost contract.
In the case of Phoenix, I've read a lot of media articles outlining how bad of a job IBM did but despite all this it seems the contract itself was never challenged in court. What I heard from internal sources is that IBM did ship correctly all what was asked for in the contract, it's just that the government workers drafting the requirements didn't understand their own payroll needs enough to properly articulate them.
Of course CBC, the state-funded media where everyone is on the government's payroll, won't outright blame their bosses. But they would get sued and lose pretty bad in court if they claimed the contractors didn't deliver what was in the contract. So you get articles with a weird spin where they try to blame the contractors without going too far and paint the government as a victim.
I believe that in Ottawa there's a negative stigma attached to those who are in tech and part of government, because of so many of these systems. Although it's sad to see, it's not completely unwarranted. I'm about to get an early start on my taxes, and Canada's tax website is sluggish and spits out errors, timeouts, etc. It's been like this for years. Pressing refresh for the 4th time until it manages to load the page I want slowly becomes the tech variant of Stockholm Syndrome. I am biased, yes, and that comes from countless frustrations dealing with government tech.
Years ago, I had a past employer that would immediately toss resumes from applicants that had recent employment with the government. Yikes.
Thankfully non-government industry is exempt from this kind of thing. For example, the airline ticketing industry has kept up with modern technology and paradigms with its corruption-free SABRE system. Banks these days exchange trillions daily through modern full-stack phone apps.
This is for the old timer banks, the ones that had their HQs built around that mainframe. Take newer fintech banks and you're in for a treat. Some of the backends are so janky it's only a matter of time before one screws up badly. Most of the effort goes into the app-polish because that's the visible part that makes it into the media.