
> They're a consultancy which evaluates computer security for insurance companies, before they insure a business.

Can you share the name of the company?


Google's related post, "Exploring Neural Networks with Activation Atlases", has some more detail and image samples: https://ai.googleblog.com/2019/03/exploring-neural-networks....

Interesting example of the activation atlas in action:

> These atlases not only reveal nuanced visual abstractions within a model, but they can also reveal high-level misunderstandings. For example, by looking at an activation atlas for a "great white shark" we see water and triangular fins (as expected), but we also see something that looks like a baseball. This hints at a shortcut taken by this research model where it conflates the red baseball stitching with the open mouth of a great white shark.

> We can test this by using a patch of an image of a baseball to switch the model's classification of a particular image from "grey whale" to "great white shark".


I can second this recommendation – Oclif[0] is an absolute pleasure to use. I'd even encourage developers who aren't at a "full-stack Node.js shop" to give it a shot.

I'm particularly impressed with the bash and zsh autocomplete support. I was delighted when I finished setting it up and everything worked perfectly on the first try.

[0]: https://oclif.io/
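
For anyone curious, a single command ends up looking roughly like this (a quick sketch from memory using @oclif/core, so treat the details as illustrative rather than gospel):

    // hello.js: a minimal oclif command (sketch; details may vary by oclif version)
    const {Command, Flags} = require('@oclif/core')

    class Hello extends Command {
      static description = 'Say hello to someone'

      static flags = {
        // "name" is just an illustrative flag, not part of any real project
        name: Flags.string({char: 'n', description: 'name to greet'}),
      }

      async run() {
        const {flags} = await this.parse(Hello)
        this.log(`hello ${flags.name || 'world'}`)
      }
    }

    module.exports = Hello

The shell completion I mentioned comes from a separate plugin (@oclif/plugin-autocomplete, if I remember the name right), and the generator scaffolds most of this boilerplate for you.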


Direct link to the study – "Electrophysiological and transcriptomic correlates of neuropathic pain in human dorsal root ganglion neurons": https://academic.oup.com/brain/article-lookup/doi/10.1093/br...


> One of the direct dependencies is a library that renders a loading spinner in command line interfaces, which itself pulls in over 20 transitive dependencies.

Just trying to understand, is this a bad thing?

Someone else made an open source CLI spinner library which also uses other people's existing open source libraries. This saves a lot of time and gives developers many good options.

Should Pika write and maintain its own custom CLI spinner animations? Are you saying the CLI spinners should be standardized in the next version of ECMAScript itself?

How is this worse than the same thing written in Python, for example? (I mainly use javascript, so maybe I haven't been exposed to the kinds of alternatives you're thinking about.)


It’s bad because this leads to a standard React project having over 2,000 dependencies.

The real question is: do you -really- need an external lib with 20 dependencies just to show a freakin’ loading spinner? Reinventing the wheel is bad, but so is never making truly simple things yourself, or simply doing without them.

What happens when a common package breaks? What happens if it gets hijacked and becomes a security vector that’s impossible to spot because it’s loaded as the 567th package in a dependency tree?

The answer here is to have a strong stdlib so you don’t need to pull in 3rd-party packages for trivial things all the time, and to stop including a million small packages in every single project.
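
To make the spinner point concrete, here is roughly all a bare-bones one takes in plain Node (a toy sketch only; the published libraries obviously add frame sets, colors and TTY handling):

    // spinner.js: a dependency-free CLI spinner (toy sketch)
    const frames = ['-', '\\', '|', '/'];
    let i = 0;

    const timer = setInterval(() => {
      // \r moves the cursor back to the start of the line so each frame overwrites the last
      process.stdout.write('\r' + frames[i] + ' loading...');
      i = (i + 1) % frames.length;
    }, 100);

    // later: clearInterval(timer); process.stdout.write('\rdone        \n');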


> The real question is: do you -really- need an external lib with 20 dependencies just to show a freakin’ loading spinner? Reinventing the wheel is bad, but so is never making truly simple things yourself, or simply doing without them.

So the problem is the sheer number of dependencies? What is a reasonable upper limit?

Yes, javascript should continue to standardize commonly used features, but avoiding dependencies doesn't seem to be a solution.

If anything, more dependencies are a good sign, because they imply that other people have spent more time and effort on the solution than you'll ever be able to spend on something hand-rolled for a single use.

It sounds like the root issue here is just dependency management. If our package managers were solving this issue well enough, there would be no practical difference between 2 big dependencies with significant functionality (and more code to review) and 20 tiny, easy-to-review dependencies.


From a security perspective, minimising dependencies is preferred. I have to review all our dependencies at least monthly for published vulnerabilities and new versions.

We don’t allow automatic upgrading of packages/dependencies due to the risk of malicious code making it in (see https://www.npmjs.com/advisories for examples). Yeah there are companies that will help manage your vulnerability process but it’s still a lot of overhead and only grows as the number of dependencies grows.

There’s also the whole left-pad mess from a few years back which shows you always need local archived copies of any dependencies you use.


> We don’t allow automatic upgrading of packages/dependencies due to the risk of malicious code

That's a good idea, how do you do that?

> you always need local archived copies of any dependencies you use.

Are you committing your dependencies? Or using a package manager with caching?


>I have to review at least monthly all our dependencies for published vulnerabilities and new versions.

Off-topic, but can you write about how you manage this without tons of manual work?


If you care about security, you should evaluate your dependencies.

This preferably means reading the code you're pulling in, but it's unrealistic that we're going to read 2000 constantly changing dependencies for every deploy, so you need to establish trust somehow.

Reputation of maintainers, dependency CVE scanning, SAST, and protective monitoring can all add assurance, but they won't protect you from a random hijacked npm module, and the more packages you include from more people, the more likely you are to be affected by a zero day.

Having an entire community depend on tiny libraries that do nearly nothing exacerbates this problem, and if you use something that almost nobody else does, you're unlikely to be saved by npm audit.

I don't use Node as a daily driver, but I assume npm audit growing to include reputation, and frameworks owning more of the dependency tree, will help. The users of the system also need to be conscious of the risks they take and the trust they place.


I used to keep all npm modules in source control (SCM), until npm introduced tree shaking. I'm using ZFS, which has both de-duplication and compression, so I gain very little from tree shaking. I wish there was a way to disable tree shaking in npm; it's really the source of all evil. Anyway, I reviewed all code diffs after each npm update, and very little changed, i.e. it wasn't that much work. But it's now impossible, as npm moves files around. I also deleted a lot of unnecessary files. About 90% of the content of each npm package isn't needed.

Another reason why I stopped hosting dependencies in SCM is native modules. I wish Node.js itself could become a platform layer so that I wouldn't have to use these native modules.

Another thing is compile-to-JS languages, where a tiny change in the source can cause a huge diff in the generated JS.


I committed my dependencies until recently too, and I've been trying to figure out a better alternative. Are you doing something else now rather than committing dependencies? (Right now I'm just using a hodgepodge of npm scripts to lock things down.)
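
(If it helps, the kind of script I mean is a toy check like the following, which fails the build if any dependency range isn't pinned to an exact version; my real setup is messier, and the file name here is made up:)

    // check-pinned.js: hypothetical example, fails if any dependency uses a ^ or ~ range
    const pkg = require('./package.json');

    const loose = Object.entries({ ...pkg.dependencies, ...pkg.devDependencies })
      .filter(([, range]) => /^[\^~]/.test(range));

    if (loose.length > 0) {
      console.error('Unpinned dependencies: ' + loose.map(([name]) => name).join(', '));
      process.exit(1);
    }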


There is an enormous difference between 2 large packages and 20 small ones. I can see the main authors more clearly, and I don't have to worry about 20 different packages being compromised.

Your statement makes no sense: more packages do not, in any way, shape, or form, correlate with “better quality”. The sole thing they show is that the author pulled in more packages. That’s it. Whether it’s good or bad, secure or not secure, is only determined by analysis. The author could easily have been super lazy and pulled in 18 small packages instead of writing small helper functions manually.

There is no defensible argument for more packages and bloat. None. As I said above, the core lib should focus on providing functionality, so pulling in a fancy spinner library means you pull in ONE library, not 20.


It’s fine from a development perspective where the job is to make something work.

It’s horrifying from an operations perspective where the job is to make sure everything works.

Developers can afford not to look into dependencies; operations needs to make sure every dependency is functional and safe.

If you write a piece of C# using the standard .NET library, you can be fairly sure it’s safe and sound. If you write something using 2,000 JS packages, you have to read through every one of them to be sure.


I disagree that it's the job of a random ops person to ensure that a developer's dependencies are sane.

Putting the onus on the developer to do a good job with regards to secure development practices is an essential part of a wider system.


Oh I agree with that completely, but it’s very easy for developers to get away with shitty practices in a lot of shops.


Agree with you but this goes the other way too - Ops not allowing security patching or upgrading of systems because "they work".


I always have trouble using PGP to encrypt and send files – I always have to refresh on the CLI usage, and whoever I'm sending to usually does too. I think this is a really cool tool for sharing encrypted files with people who are less comfortable with PGP.


> one of my friends has a mega-sized server that he just runs builds and trains ML models on

I built my first desktop computer to do this recently. I can definitely recommend it, even for local server development. It's nice to be able to have a version of a codebase running on the remote machine so my local laptop stays cool while I'm testing.


Do any of these options have a case that feels sturdy? It seems a little silly describing it this way, but one of the reasons I keep coming back to macbooks is the aluminum case. I want good specs, but my lizard brain also wants a laptop which doesn't feel like a flimsy plastic toy.


Dell XPS 13 Developer Edition is a good laptop.


Ah, fair enough. I think Purism cases are all-metal. I haven't touched one though -- I'm going to once they release a 32GB model.


System76's Oryx Pro has an aluminum case.

