I would love to know which cloud companies, such as Amazon AWS, Azure, Google Cloud Platform, etc., use Cisco in any critical part of their architecture.
Because Cisco has proven over and over again that they can't be trusted. Or has something changed these last couple of years that I do not know about regarding how they handle security?
From my understanding, Cisco has been taking security very seriously ever since the Snowden leaks.
They essentially have an entire team dedicated to designing and implementing security measures for their devices. However, the focus seems to be on hardware and low-level security. I don't know if they have an active high-level software security team.
As a security consultant, it makes me sad that the bar for "taking security seriously" is as low as just "having an entire team dedicated to ... security".
Every company should have a team (or depending on size, at least a person) dedicated to the security of software and infrastructure. Even the most traditionally slow-moving companies these days have security teams. If you do not have this, you are behind and are laughably unprepared for the modern age. I wish the public would start recognizing this and publicly shaming companies that don't take security seriously.
> Every company should have a team (or depending on size, at least a person) dedicated to the security of software and infrastructure.
Two competitors in a space. One has 3 developers and 1 security guy, the other has 4 developers.
In 99/100 outcomes, all else being equal, the four-developer team will add more features, outpace the other team, and win the market.
Security is an overhead and a drag on company resources... until it's not.
But how many startups fail due to lack of a security team? I cannot think of any from my RL network. I am sure some exist... but otherwise, this is the same as most "safety" measures... unimportant until it is really important.
Mainly b/c there are no real downsides/penalties to leaking data (or PII) - you say "sorry" and just move on. Sometimes with an increased stock price - see Equifax :(
Depends on the industry. In some heavily-regulated industries, there are huge fines at stake for leaking information.
Look at healthcare. If information leaks, it has to be reported to the feds within 60 days of initial discovery. Not C-level finding out, but whatever low-level employee finds it. Even if it's one person. If it's more than around 500 people, there's a federal requirement to report it to the local media.
I used to work specifically in the healthcare space doing cleanup work for companies that had recently been breached. I think "huge fines" is quite an overstatement. The largest HIPAA fine in history for a data breach was for the Anthem hack, a leak of ~80 million records, for which Anthem paid $115 million. That's no small number, but at the end of the day it's still less than 5% of their yearly net income. And Anthem is a huge outlier: the second-largest HIPAA fine in history was only ~$6 million. It's not exactly a huge deterrent for companies that want to ignore security.
In terms of public exposure, you'd probably be surprised at how many healthcare insurers/providers have data breaches that you never hear about, because these companies know how and when to report them so that they end up as nothing more than a footnote on the back page of the local paper.
They're also a category where security is inherently a user-visible feature of their products (and as such, it _should_ be easier to justify security engineers)
In other news, in 99/100 occurrences of the phrase "99 times out of 100" the person stating it is pulling numbers out of their ass to sound authoritative.
If you think the vast majority of the time the extra developer helps that company succeed and outperform the other company, then perhaps you should start with "I think the vast majority of the time..." instead of making up magical numbers. Sure, we all know the numbers are bullshit, but that's no excuse when there are perfectly normal ways to express what you mean that don't masquerade as fact.
> Security is an overhead and a drag on company resources... until it's not.
And sometimes that "until it's not" can be from the very beginning, if it's used for market differentiation. As soon as you talk about competition and businesses, it becomes about a whole lot more than technical merit and features. Things like current market position, advertising, strategic partnerships, etc. start having an impact, and sometimes it's a large one.
> In other news, in 99/100 occurrences of the phrase "99 times out of 100" the person stating it is pulling numbers out of their ass to sound authoritative.
I can use whatever writing style I want to. If you don't like it, read another post.
Sure you can. And I'm free to call it out when you are making factual assertions that aren't facts. In the same vein, you're free to ignore my comment.
I'll concede my original comment might have come across as a bit more harsh than intended, but I think the point stands. If you're trying to prove a point or provide information in a discussion, don't use loose and hyperbolic terminology without expecting to be called out on it.
Doesn't his "99/100" give us a much better idea of his confidence in his own knowledge, compared to something more vague like "usually" or "the vast majority of the time"?
I ask because I've been studying how to make better decisions, and one of the techniques is to use numbers instead of the easily misinterpreted words. The famous example is the Bay of Pigs decision -- Kennedy was told the invasion plans had a "fair" chance at succeeding. The advisor was later interviewed and said that he meant 3-1 against as "fair odds", which could easily have changed Kennedy's decision.
> Doesn't his "99/100" give us a much better idea of his confidence in his own knowledge, compared to something more vague like "usually" or "the vast majority of the time"?
Sure, it gives an idea of confidence, but we haven't established that confidence as worth anything yet, and we're being fed numbers to add an air of legitimacy to a statement which doesn't really have any otherwise. Unless someone has actually examined 100 cases, it's just bluster. It's like saying "oh, that never happens" to discount an argument entirely. It's not helpful to a real discussion.
> The advisor was later interviewed and said that he meant 3-1 against as "fair odds", which could easily have changed Kennedy's decision.
The adviser was likely also someone with previously established expertise in this area, which means that opinion might be worth something without supporting evidence. Additionally, giving actual predicted odds is different from falling back on a colloquialism that almost never literally means 99 times out of 100 or 99%, but instead is meant to convey "almost always"; using terminology like that in a conversation without any supporting evidence isn't useful.
I agree about this particular comment -- your interpretation is likely correct. I was just trying to connect it to something larger: the decision-making skills that I'm working on (and thought might be interesting to an HN audience).
FWIW, the Bay of Pigs incident is widely cited as an example within this sphere. It has nothing to do with whether the advisor was previously skilled -- it has more to do with thinking about how people understand and use the information I communicate (and with the fact that using numbers instead of words seems to be considered "best practice" in the decision-making community).
I'm going to disagree on the basis that segmenting your security personnel creates a lot of difficult social dynamics. However the model is structured, "security" becomes a pain point, and no one wants to see the security guy coming around the corner.
The default interpretation (which is basically the only one that counts) is that this guy is coming around, tearing down _your_ work, and making you look bad in front of everyone, and what does he even know about this thing?
Some people will chicken out, not press the matter, and then the "security people" are just the fall guys, unable to do anything other than take the blame for other people's screw-ups. In my experience, a lot of internal security people end up stuck in that position whether they're trying to fight the good fight or not.
The dynamic is similar to the corrosive POV that sees "dev" and "ops" as orthogonal concerns to segment out, and we all know how well that's gone over. Anyone looking forward to hearing all about "DevSec" in the next 10-15 years, once some startup finds a way to put a cute icon on it and gets a big pile of investor money to burn (cf. Docker)?
Rather than create a tense segmentation, it's better to just hire competent developers who know security well from the get-go. Maybe they can rotate weeks doing security reviews or something like that.
I don't know, I don't really have a good solution. I just know that making someone the enemy by labeling them "security personnel, here to block your deploys, tell you you're doing your job wrong, and generally just embarrass you in front of your peers and superiors" is not going to help much.
I agree, but I think you overestimate: (1) how many companies actually care about security and (2) how recent this security-focused movement among large companies is.
Furthermore, Cisco deals with security problems that few other companies worry about, such as the wide-scale availability of counterfeit devices and convincing foreign customers that their devices are backdoor-free.
(3) What small firms’ contribution margins allow, given the competition in their markets.
I, for example, run a family business that operates in the paint sector in Italy, and you’d be horrified by how compressed margins are (and by how much more compressed they have become over the past five years of debt crisis and economic recession).
There’s always this aura that a rational firm can arbitrarily add structural (HR) costs to hedge against some nebulous, future “risk”, but in my experience this is not the case (anymore). Competition is so hardscrabble and survival is such a luxury that pretty much everybody is not only willing but obliged to skimp on everything. Being “with the times” simply isn’t an option, sadly.
Companies won’t take security seriously until 1) customers demand ways to evaluate the security of a company and make purchasing decisions accordingly (“size of security team” doesn’t count), and 2) we sufficiently punish companies that cause user harm through security failures, e.g. forcibly liquidating Equifax and confiscating all senior executives’ assets, including their houses.
In a large company, a company with sufficiently-sensitive data (where loss is a legal matter), or a company with a lot of money to lose (financial processing), having a dedicated security team might make sense and might be worth the money, but in most situations, I think it does more harm than good. Security teams rarely know how applications work, so their approach to security is heavy-handed and provides a false sense of protection. It's more important to instill everyone with security-minded philosophy, so that applications and systems are designed with security in mind from the beginning.
Firewalls, packet inspection, virus scanning, and the like would all be completely unnecessary if code was written securely and if human beings behaved properly. Of course these things don't happen, and of course firewalls and scans and the like are common-sense protections, but most data loss is caused by poorly-designed applications which run in very secure environments. Dedicated security teams simply aren't in a position to identify and solve problems like buffer overruns or SQL injection.
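To make that concrete: the kind of bug that sinks most applications is trivial at the code level, and a scanner sitting outside the app will never reliably see it. A minimal Python sketch of SQL injection and its fix (the table and column names are made up for illustration):

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: attacker-controlled input is spliced into the SQL text.
        # A username like "x' OR '1'='1" turns this into SELECT-everything.
        query = "SELECT id, email FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized: the driver passes the value separately from the SQL,
        # so it can never be parsed as SQL syntax.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

Only a developer reading or writing that line is positioned to catch the difference, which is the point about designing with security in mind from the beginning.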
>Security teams rarely know how applications work, so their approach to security is heavy-handed and provides a false sense of protection.
Yes, sometimes there are overly bureaucratic processes in place, but as a security guy, do you know how many times we see devs self-SQL-injecting themselves on a webpage? How many times they ask if their service needs to be in the "DVZ" (what the fuck is that? I think they mean DMZ)? Or how often they don't get why it's a problem that they ask for "The Internet" when I ask which external services their server needs?
Don't act like devs (or even IT Ops as a whole) have their shit together. They're at LEAST as security-crippled as security EVER is over-paranoid.
And as for this...
>Firewalls, packet inspection, virus scanning, and the like would all be completely unnecessary if code was written securely and if human beings behaved properly.
First, code will never be written completely securely; second, security is a spectrum of mitigations, so ACLs/firewalls/scanning will ALWAYS be necessary; third, humans NEVER behave properly, inside or out, so security will always be in business.
>Security teams rarely know how applications work, //
I'm amazed if anyone working in a computer security role doesn't know about SQL injection attacks and their mitigations. You'll be telling me they don't know about salting passwords next??
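(For anyone following along: salting really is a few lines in any language. A minimal sketch with Python's standard library; the iteration count is illustrative, not a recommendation.)

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per user defeats precomputed rainbow tables
        # and makes identical passwords hash to different values.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(candidate, expected)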
Where I work, yes, the security team doesn't know how applications work. We have a dedicated security team: they run Nessus scans for compliance, plus network scans, and hand the results to managers and executives, who hand them to us. They explicitly don't touch the code. A senior guy has declared that he's not responsible on that level, that he's "no coder", in response to a shell injection vulnerability. So he knows "about" salting passwords, but he probably couldn't verify whether they are salted or not.
To me it's left the impression that 1) security is mostly theater and 2) "security engineer" is a meaningless title. It can mean anything from "runs scans" to "network administrator" to "penetration tester" to "good engineer". Unfortunately, my default attitude towards the title has become "I'm probably dealing with a charlatan."
Yep. The vast majority of "security" folks I've worked with could be described as "compliance" folks instead.
Almost nothing is done for actual real infosec, and in many cases their existence actually is a step backwards.
A real security-minded company (and I've worked for some) doesn't have such a silly thing. Everyone is considered part of the security "team", since that's the only way secure services and software actually get built.
Hiring some random dude to run half-assed tools against hosts and generate reports is simply checking a compliance box, and only serves as CYA for the organization and, more importantly, its officers. Almost no one actually cares about real data security, in my experience.
Calling this security engineering is a misnomer. Yes, I agree the way you describe it this is pure theater.
I'm going to take a bit of a long view here, please bear with me:
A lot of engineering is plagued by theater, forced either by external or internal (read: management) pressure to project an image. Examples: "Everything you see is rendered online", "our investing plan is run by this awesome bot", etc. With security, a lot of the time it's hard for people to see the incentive. Loss is hard for people to quantify, and then there is the extreme optimism: "Oh, we're using TLS and we have our data on AWS, we don't need anything else." It's a field that is hard to quantify and that focuses on rare events.
And then consumers just don't care about non-tangible things like their data until it's too late. What happened to Equifax? If Equifax had directly lost an amount of money equal to the damage its data loss has caused (and will cause), the public response would have been more drastic and there would be civil charges at the least.
Same for Trustico. A guy decided it was OK to send a crazy number of secrets over email. Then people realized they had exposed a freaking root shell. And now Trustico is back in business (still unbelievable to me...). So companies have little pressure to do the job right, and they tend to satisfy only the minimum necessary requirements.
A final reason: too little supply. This field has a lot of specialization, and there is a bunch of theory and skill required. And on top of that, you need a good understanding of, and skills from, the other fields you touch (e.g. linking, compilers, web development, distributed systems, ...).
Similarly, to be a graphics engineer you need to know some computational geometry, plus a bunch of extra stuff depending on your specialty. I think it's hard for someone to pose as a game-engine engineer while having no idea about convex hull operations. Due to the reasons above, however, a person can pose as a security-oriented engineer, drop a couple of buzzwords, and get a job at some companies. And from experience, management is filled with people saying things like "What, why aren't we encrypting passwords?" or "We need an option for people to opt out during ClientHello and just use HTTP."
Regarding your senior guy: he is not a security engineer or anything of the sort. From the way you describe the situation, and taking a charitable view, I think management had to sell to some outsider that they care about security, hired a penetration tester, gave him a title that might as well have been "Lord of the Stars and Black Holes", and went on with their lives. Why? Because no one holds them accountable for that aspect of the product.
It's hard to bullshit people about an app that doesn't work. It's easy to bullshit people about an app that is barely functional and sends their data into the pockets of third parties.
From personal experience, that is not the case at some serious companies. And there are some extremely good security engineers, cryptographers, and the like out there.
The thing about security is, it does not work well to try to just add it "on top". You have to overhaul the existing teams and processes, to basically be mindful, to think about this stuff and care.
Just enumerating all the stuff you need to not do (e.g. "don't call system() with user input") is inefficient and, in my opinion, not completely effective. "Scans" turn up a bunch of false positives and false negatives.
There's a new-ish book, "Secure by Design", that makes this point -- and shows ways for ordinary devs to design for security without explicitly thinking about security.
I really like this book so far (it's still in the early access stage of writing).
More importantly, there's also the Secure Development Lifecycle, a process that involves basically everyone. Security is not an "after the fact" feature that gets bolted on right before market time; it's a requirement for any project or feature, starts at design time, and gets evaluated several times during development of the product (and after it's released).
That's a pretty asinine way to add a backdoor. I think we can give the intelligence community more credit than that.
Plus, if you're an intelligence agency, the ideal backdoor is one you can use, and only you can use -- one that other adversaries can't take advantage of even if they know it's there. A hardcoded password is the opposite of that; once its existence leaks out, which it invariably will, it's open to everyone.
Having a developer modify some crypto code in a subtle way that happens to reduce the effectiveness of the system such that it becomes marginally more vulnerable to a certain type of obscure cryptanalysis, preferably very resource-intensive analysis that only you can perform, is the direction I'd try for...
If an intelligence service is adding the backdoor themselves, or the company is willingly working with them, sure. But I can imagine a scenario where a court orders a company to provide access without specifying the technical means. In that case, the company isn't particularly incentivised to make a subtle/NOBUS backdoor.
I'd also wonder about deniability: a hardcoded password can be written off as a development/debug oversight, whereas a more structured backdoor or a deliberate vulnerability to cryptanalysis is harder to explain away if found.
>>>> They essentially have an entire team dedicated to designing and implementing security measures for their devices.
>>Isn't that the absolute barest minimum for a corporation of that size and infrastructural importance?
>Other than your occasional Google or Apple, I don't think so.
Cisco is a $218.62B company (about 24% the size of Apple by market cap, 27% the size of Google) which is a "networking hardware company". (As opposed to, say, McDonald's.)
I'd say it definitely qualifies as your "occasional Google or Apple" in terms of "size and infrastructural importance."
I was not talking about "size" at all. I was referring to Google and Apple as examples/outliers of companies that put a huge emphasis on security in practice.
I’d beg to differ. Even in the Silicon Forest, I frequently come across firms with entire dedicated AppSec and InfoSec departments. Most dev shops out here have them, and I’d be comfortable extrapolating that to SV and old-money NY/CHI/LA and Austin based on my consulting experience.
All of this is, of course, anecdotal but it’s a tad more than a blanket refutation.
My understanding is that any government vendor has to let said government at least audit the source code, if not hand it over so they can modify it for the equipment they purchase.
With that said, if a government also controls the territory the gear is shipped through, it has other options: recall the pictures of the NSA intercepting Cisco boxes in transit to update the firmware.
So really it's a matter of which government you want spying on you, and whether you keep your management plane private or not.
Google (actually X/Alphabet) started an entire company focused on computer security. They would have whole departments dedicated to this, rather than teams. But I get that maybe this is a semantic issue.
> Cisco has proven over and over again that they can't be trusted.
So has Windows. So has Linux. So has MacOS. Does anyone stop using these just because they're riddled with security flaws?
Cisco is a vendor just like any other. You cannot reinvent every single piece of technology in the world just because you think you can do it better: not only because you probably can't, but also because it costs too much and you don't have enough time.
Saying "Cisco can't be trusted" just because they have security flaws is a bit like saying "Debian can't be trusted" just because they've had massive holes that nobody else has had. They fix the holes, they put new practices in place, they move on. You don't have to stop using them, but the alternatives are not definitively more secure.
None of your examples have had hardcoded-password vulnerabilities pretty much once a year for the past 20 years. (Yes, a slight exaggeration, but you get the idea.)
And only one of these, Cisco, is a security/infrastructure company. So when it consistently misses low-hanging fruit and violates best common practices... yeah, its trust level should take a hit.
Whereas Linux probably has had privesc holes once a year for 20 years, Cisco definitely has not. Their core products have been rock solid for decades. Only their newer, non-IOS stuff has had persistent problems lately, with a few of IOS holes.
Actually, they have. IOS has had almost one a year for the last 20 years. Compare code-execution CVEs against the Linux kernel and they are damn close if you exclude the kernel's huge number for 2017; the kernel has only 20 more than IOS. And the kernel is likely far larger than IOS.
Google (corporate? maybe not Cloud) and Facebook still buy a lot of vendor gear, though it's mostly Arista. Google and FB have investments in running their own whitebox gear, but they use their own purchasing size as a carrot with companies like Arista. "If you can build this feature in 3 months, we'll buy X units. Otherwise we'll build it in house." Arista has much more resources to bring to the challenge, and they generally get the feature built quickly.
Sometimes I wonder if the "in house" networking projects are just a ruse to get vendors jumping whenever you call. The brainpower and talent that a company like Arista can dedicate to projects is much more than a tiny division inside Google or FB. Not hating on Google or FB, just trying to figure out their game.
It makes (business) sense to build your own gear when you operate at scale. Typical use cases are ToRs and small-table routers used for data center Clos topologies. Both often make use of merchant silicon, which comes with SDKs.
Still trying to understand the ToR case. They're pretty basic: enough ports, BGP, and mgmt agents to do just what you need. That said, these things are _stupid_ cheap now. The Quanta switches that AWS used/use couldn't have cost more than $200 each direct from the ODM. Maybe at scale it makes sense to build your own if you only need 38 ports in a rack, so you save on power by cutting out the extra 16 ports or so? Improving PUE and opex might be the big driver here?
I mean, these ToR switches really are super cheap if you buy direct from an ODM. To design your own ToR, you'll need at least a team of 5-6 people, each making $400-500k annually at a company like Google or FB. How much is the cost savings per ToR switch in capex? $10/switch? In opex? I think it might make opex sense (back-of-envelope math below).
You'll build and run your own mgmt plane OS for sure, but that's tangential to building your own hardware.
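To put rough numbers on that, a back-of-envelope calc using the purely hypothetical figures above (a 6-person team at ~$450k fully loaded, $10 capex saved per switch):

    # Hypothetical numbers from the comment above, not real figures.
    team_size = 6
    cost_per_engineer = 450_000        # USD per year, fully loaded
    capex_saving_per_switch = 10       # USD saved per custom ToR switch

    annual_team_cost = team_size * cost_per_engineer          # $2.7M/year
    break_even = annual_team_cost / capex_saving_per_switch

    print(f"~{break_even:,.0f} switches/year to break even on capex alone")
    # -> ~270,000 switches/year, which is why the real payoff, if any,
    #    has to come from opex/PUE rather than hardware cost.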
Right - the hardware is largely already built. PHY, chipset, host CPU/disk/RAM, etc. It really comes down to which whitebox gets selected. Facebook does do some hardware design, but it's more a matter of chaining multiple merchant-silicon chipsets together or building modular boxes that can house multiple chipsets.
Control-plane/management-plane software is often written by these companies or by contractors.
NICs are another story. MSFT and AMZN both build their own NICs using either FPGAs or custom silicon.
It is somewhat telling that we keep finding hardcoded credentials in their gear. An honest company making an honest mistake, after the first or second time this came out, would have ordered a review to ensure that no further instances of this are in shipping code.
A lot of their money is/will be coming from SDN, virtualization and security, so they are definitely not limited to big metal boxes. But that's what they specialize in, and that's what most of the whales rely on.
I would bet they all do. Yes, they use custom hardware for some intra-datacenter tasks, but outside of that I think they will all have Cisco to one degree or another. Mostly to a great degree.
Cisco has had a bunch of boneheaded security issues "recently" (last 2 years or so, by my casual observations). A quick Google search will find several such examples.
Cisco has 800,000 customers and thousands of active products in the market. I think Cisco does an exemplary job of handling security, both proactively (in design, in development, in production) and in response to disclosures and discoveries. Same with Microsoft and the few other companies on the planet that operate at this scale.
I think it's easy to find our mistakes, but keep in mind our corporate culture is to shine a light as honestly and transparently as possible on security issues, and we strive to err on the side of caution and communication.
> The flaw can be exploited only by local attackers, and it also grants access to a low-privileged user account. In spite of this, Cisco has classified the issue as "critical."
I find this reassuring. It is the opposite of downplaying security issues.
Cisco in the past few years has been good about handling security incidents, although it makes you wonder why they keep adding hard-coded passwords to their gear. Unless it's individual devs adding them without documenting them or telling anyone else on the team.
I'm sure the executives take issues like this incredibly seriously. None of this stuff is insidious in nature, it's just what happens when people bypass processes.
Engineer doesn't take the time for proper password management -> Password gets left in source -> Other engineer who does code review misses password -> this continues for several iterations -> product gets released.
Unfortunately this definitely happens more often than you would want to think.
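The boring fix is process plus tooling: keep the credential out of source entirely, and make CI grep for ones that sneak in. A minimal sketch, with a made-up variable name:

    import os
    import sys

    # Instead of:  PASSWORD = "hunter2"   <- the literal that ends up shipping,
    # read the credential from the environment (or a real secrets manager).
    password = os.environ.get("SERVICE_ADMIN_PASSWORD")
    if not password:
        sys.exit("SERVICE_ADMIN_PASSWORD not set; refusing to start")

Pair that with a pre-commit or CI check that fails the build on credential-looking string literals, and the review-misses-it-for-several-iterations failure mode gets much harder to hit.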
Couldn't the same have been said about Microsoft say pre-Vista?
Of course taking security seriously doesn't magically make you have competent staff and eliminate embarrassing vulns, Windows has still had its share post-Vista. A lot of "taking security seriously" can just turn into security theater cheerleading and focusing too much on certain processes (especially response over prevention[0]) without ever doing effective threat modeling.
[0] You fixed a reported admin-attacking-admin XSS bug within the SLA, good job! You're also letting admins upload binary blobs you then parse, has anyone run a fuzzer on this to help uncover any potential code execution bugs? Does anyone even know what a fuzzer is? No? Carry on... Until something gets reported.
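For anyone nodding along but hazy on the term: a fuzzer just hammers a parser with mutated inputs and watches for crashes. A toy sketch, where `parse_blob` is a hypothetical stand-in for the upload parser:

    import random

    def parse_blob(data: bytes) -> None:
        # Stand-in for the real binary-blob parser under test:
        # pretend a 0xFF length byte trips an unchecked read.
        if len(data) > 2 and data[:2] == b"\x42\x42" and data[2] == 0xFF:
            raise IndexError("simulated out-of-bounds read")

    seed = b"\x42\x42" + b"\x00" * 16   # a known-valid input to mutate

    for i in range(100_000):
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):   # flip a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse_blob(bytes(data))
        except Exception as e:
            print(f"iteration {i}: crashing input {bytes(data).hex()} -> {e!r}")
            break

Real fuzzers (AFL, libFuzzer) add coverage feedback and corpus management, but even this blind loop finds the class of bug that an XSS-fixed-within-SLA process never looks for.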
In the decade around the 2000s, we went from almost zero computers in the developed world to virtually every household having one, all permanently connected through a high-speed network.
That's a new universe of unplanned threats and attack vectors. None of this was anticipated when operating systems were designed, a few years before release.
I wonder, given Cisco's global prevalence, whether it would be worth it for a state to get an individual dev to insert the hard-coded password. Or even for an exploit broker, given how much a zero-day can sell for.
The paranoid in me thinks that the low-privileged user account isn't as benign as some may think. If someone hard-coded a password on purpose they might have done so knowing of a vulnerability to escalate privileges. Then if it gets discovered, well, it was only a low-privilege account no big deal.
"Cisco" and "security issue" go together like peanut butter and chocolate. Knowing this, and with the knowledge that IOS is some form of arcane torture, leads me to wonder: why hasn't Cisco been completely obsoleted by Juniper or other providers?
Actually they do, but those salespeople will make sure payoffs are part of the purchase, and even if you get fired for calling out the conflict of interest, it is usually hard for an honest employee to fight a multi-billion-dollar company that has professional propaganda departments.
Maybe because Juniper loves those backdoors, too.
Back in 2015 they pretended to be shocked that Dual_EC, an algorithm people had been warning since 2007 might contain a backdoor (a suspicion confirmed by Snowden's documents in 2013), could actually be exploited by the baddies.
> Back in 2015 Juniper pretended to be shocked that the Dual_EC algorithm
To complete your point:
Security researchers solved the mystery around a sophisticated backdoor embedded in Juniper firewalls. Juniper Networks announced on [Dec. 17, 2015] that it had discovered two unauthorized backdoors in its firewalls, including one that allows the attackers to decrypt protected traffic passing through Juniper's devices. The NSA may be responsible for that backdoor. Even if the NSA did not plant the backdoor in the company's source code, the spy agency may in fact be indirectly responsible [due to] weaknesses the NSA allegedly placed in a government-approved encryption algorithm known as Dual_EC. The Juniper backdoor is a textbook example of how someone can exploit the existing weaknesses in the Dual_EC algorithm the security community warned about back in 2007.[1]
IMHO, the Dual_EC thing, as bad as it was, is completely different from the sloppy crap like a litany of hard-coded passwords. One of these has political complications, albeit ones that should not matter; the other is an issue of basic engineering competence.
They still enjoy name recognition from the era when they were pretty much the only player in high-performance enterprise-grade networking equipment. That era is long since over, but there are still a lot of networking people who came out of and are still coming out of Cisco's Netacad and aren't familiar with other vendor options.
There is also a generation of networking people who went with the "cheap" Dell and HP equipment only to realize it didn't have all the advertised features.
Juniper and Force10 were nice, if only they could have been bought outside the USA and by non-English speakers.
That's what I recall from a decade ago. Can you name the options that were so great?
The reality is that, as in most very large software organizations, mediocrity is the default. Based on the hires I personally know of recently, even Google is quite obviously and visibly well underway in the process of mediocre-ization.
Fill your company with mass hires to get "area under the curve." Make vanity hires. Slant compensation to the top 10%. Hire only the desperate who couldn't find work elsewhere when doing school recruiting but consider yourselves "successful" in college recruiting. Have people who are so institutionalized they not just mostly have never worked outside, they wouldn't be able to. Congratulate Foo for 22 Years at XXXX!
Hire tons of low cost contractors. Be that company where teams of inexpensive contractors don't just "help out" on projects by making everything harder, they go on to work for the company for 16 years. Build out in lower cost locations without imposing a bar - after all, engineering hours are fungible and even if the quality is lower you can do 3:1 - and they'll have mentoring from the Valley engineers assigned to turn them into successful teams. It's just engineering.
It turns out that mediocrity in coding is very, very bad.
When I was learning Oracle (the DB), an engineer told me: "The password is scott/tiger." I was like: "That seems easy to guess." Him: "No, it's hard-coded into Oracle; it works everywhere!"
From cisco-sa-20180307-cpcp: "...an unauthenticated, local attacker ... could exploit this vulnerability by connecting to the affected system via Secure Shell"
This article incorrectly refers to ACS as a firewall system. ACS is used primarily to control management access to Network Devices via TACACS+ or RADIUS. It offers no firewall functionality whatsoever.
This is arguably worse than it just being a firewall. I imagine that it wouldn't be a huge leap for someone to use this exploit to create a local account and policies to give themselves access to every router, switch, firewall and appliance in an enterprise.
Also, while there are no workarounds for that software version, Cisco has released free software updates that address the vulnerability described in this advisory.
The guys over at Packet Pushers (check out their podcast) have been railing on Cisco for their software quality for years. This isn’t surprising to me.
And yet Cisco remains a big sponsor of theirs. The balls on those guys: literally laughing at the hand that feeds them.
"This vulnerability affects Cisco Prime Collaboration Provisioning (PCP) Software Release 11.6 only. No prior builds are affected by this vulnerability."
This part smells fishy. We will probably never learn how it was introduced in this build.
This is not something you can expect ever to find out, burning with curiosity though we may all be. Since there's never a guarantee that every vulnerable system has been patched, it's very unlikely that Cisco will reveal the password. (It could, of course, be leaked somehow, but you will never find out officially.)
Edit: I forgot that Google uses their own hardware, but the article doesn't mention whether they use it for their cloud platform. https://www.wired.com/2015/06/google-reveals-secret-gear-con...