ivirshup's comments

Most adobe applications have quite extensive APIs. I think you can even do some linear algebra via Illustrator.


I mean you’re not allowed to run it on a server.


I type it.

It’s typically short, and it’s a very destructive action. It’s also very apparent to me why I would want to type it out.

That said, someone is going to make a mistake at some point. I’d expect better from github support here.


This topic really got me thinking.

A lot of us navigate by typing, be it in the CLI, in our IDE, URLs, etc. And we must have gotten used to typing our company or project names hundreds of times a day, in many places.

I can't imagine typing <project name>/<project name> into an input box triggering much extra consciousness or care in us. Having to type something more unusual would probably have more impact, I think.


Logging in does not completely solve this with Google. A number of page languages are still IP based.


You can do both. Check out the difference between `import` and `using`: https://docs.julialang.org/en/v1/manual/modules/
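A minimal sketch of the difference (the module and function names here are made up for illustration):

```julia
# A toy module to show what `using` and `import` each bring into scope.
module M
export foo
foo(x) = x + 1   # exported
bar(x) = x - 1   # not exported
end

using .M    # brings M and its exported names (foo) into scope
foo(1)      # works unqualified
M.bar(1)    # unexported names still need qualification

import .M   # brings only the module name M into scope
M.foo(1)    # everything is accessed through the module
```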


I think you mean you can do `foo(x, y, z)` instead of `M.foo(x, y, z)`.

You cannot do `x.foo(y, z)` instead of `foo(x, y, z)`, a feature I sorely miss in Julia. Nim is awesome in this regard. There are some functions / paradigms that are just better represented by `x.foo(y, z)`. Chaining functions in particular in typescript / javascript / rust is SOOO nice. Specifically, in Julia a lot of times I want to create an immutable struct with multiple fields, many of which have defaults. In Julia none of the ways to accomplish this seem clean. In rust, something like this is very common:

  let m = App::new("My Program")
    .author("Me, me@mail.com")
    .version("1.0.2")
    .about("Explains in brief what the program does")
    .arg(
        Arg::new("in_file").index(1)
    );
  
In Julia the same thing is:

   m = App("My program")
   m = author(m, "Me, me@mail.com")
   m = version(m, "1.0.2")
   m = about(m, "Explains in brief what the program does")
   m = arg(m, index(Arg("in_file"), 1))

I think with improvements to the pipe operator this can be better, but right now I find this painful to read / write.
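For what it's worth, the builder chain can already be approximated with Base's `|>` and anonymous functions, at the cost of some noise. The `App`/`author`/`version` functions below are hypothetical stand-ins, just to show the shape:

```julia
# Hypothetical immutable "builder" functions, mirroring the example above.
struct App
    name::String
    author::String
    version::String
end
App(name) = App(name, "", "")
author(m::App, a) = App(m.name, a, m.version)
version(m::App, v) = App(m.name, m.author, v)

# Base's |> only pipes into one-argument functions, hence the lambdas.
m = App("My program") |>
    (m -> author(m, "Me, me@mail.com")) |>
    (m -> version(m, "1.0.2"))
```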


> immutable struct with multiple fields, many of which have defaults

With `Base.@kwdef` (or the enhanced `@with_kw` from Parameters.jl) on the struct definition, that example can be:

      m = App(name = "My program",
              author = "Me, me@mail.com",
              version = "1.0.2",
              about = "Explains in brief what the program does",
              arg = index(Arg("in_file"), 1))

which seems pretty nice to me.
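A runnable sketch of the pattern (the struct and field names here are assumptions for illustration, not the actual API from the Rust example):

```julia
# Base.@kwdef generates a keyword constructor with per-field defaults.
Base.@kwdef struct Config
    name::String                 # no default: must be supplied
    author::String = ""          # defaults kick in when omitted
    version::String = "0.0.1"
end

c = Config(name = "My program", version = "1.0.2")
# c.author falls back to its default ""
```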

Also, there's Chain.jl for a better pipe operator, though I agree improvements to the Base pipe would be great.

To the original point, the problem with x.foo(y, z) is that it would be highly misleading in that it seems to suggest that x is somehow special to the dispatch of the call, which isn't true. While superficially it can feel more familiar, it would instead become a trap for people giving them a wrong mental model of the call's semantics.


That uses keyword arguments though, which is bad for performance, and performance is the main reason I'd like to use immutable structs.


Unless you have some very weird use case (and probably even then), that sounds like a bad application of the "keyword arguments bad for performance" mental model.

The reason to use immutable structs is better performance when you use them, not when you construct them - how often are you constructing new struct objects such that constructor performance is a significant issue? And if you are constructing many, many thousands of objects, to the level it becomes a performance concern, any difference from keyword arguments will be easily dwarfed by the cost of actually constructing the struct in memory.

Also, these constructors are simply passing on these named arguments to the non-keyword constructors. Anything complex you're doing during construction will be happening in the non-keyword constructors, and those do get specialized to have good performance; these keyword constructors are simply providing a small, simple interface to them, in a way that shouldn't affect performance in any measurable way.


I have never noticed them being bad for performance. Are you really sure?


Since keyword arguments don't participate in multiple dispatch, they can't be used for compile-time specialization of methods based on their type. That can be a performance issue in specific scenarios, but I think they've taken that as a general rule and misapplied it here (as I mentioned in my sibling comment). For this scenario, that aspect of keyword arguments is most likely irrelevant.
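A small illustration of that distinction (toy functions):

```julia
# Positional arguments participate in dispatch: Julia selects and
# compiles a specialized method per argument type.
f(x::Int) = "int method"
f(x::Float64) = "float method"

# Keyword arguments do not dispatch: there is a single method, and any
# type-dependent behavior is an ordinary run-time branch.
g(; x) = x isa Int ? "int branch" : "other branch"
```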


The RFID tags are on a sticker, you can see it on the back when you take it off.


Pre-commit has its own CI that does this for you.

But if you're in a private repository, you can just have a job that runs pre-commit. The versions of the pre-commit hooks are pinned in the config.
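For example, each hook's version is pinned via `rev` in `.pre-commit-config.yaml`; the black hook here is just one common example:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0      # the pinned version a CI job will also use
    hooks:
      - id: black
```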

There's also a bot that keeps your pre-commit config up to date with the newest released versions of the checks.


The guy who coined a new term every time he had a new product to sell?


Don’t be too scared by all the TODOs in the docs; most of that stuff actually works – the docs just haven’t been written yet. Admittedly, those docs have had TODOs on them for over a year.

To see how it works, you’ve got to look at the repo or any of the talks. The developers are also quite active on the tracker, so do just ask if something is unclear.


It’s a ragged array that you can work with in Python code, but also in JAX and Numba code.


"just create a new instruction set"?

It's taken the better part of a decade for workable alternatives to CUDA to emerge.


I hope you’re not calling ROCm a workable alternative…

It’s really not close. I love that they are trying; I just honestly feel they need to be trying with 10 times as much funding/resources behind them.

Or do you mean Apple’s Metal Compute? That’s useful only with Apple hardware, which is inconvenient at scale.

If workable CUDA alternatives were really here, I feel people would be talking about them more, given how widespread the hate for Nvidia is on multiple fronts from multiple different user groups.


I was, but mostly through XLA or Vulkan layers. Also, I’d define workable as a pretty low bar.


That’s fair. For me it never worked with any of the hardware or software I wanted it to, and I’m not enough of a GPU expert to really fix that myself, so for me and many others I found with similar problems it has definitely yet to become a workable solution. I was probably a bit harsh if I came across like I thought it didn’t work at all; I’ve seen it demoed and I know it works, but for such a restrictive set of hardware and software that I’ve never seen it in use myself outside of demonstrations.

As for Vulkan, I tend to think of CUDA as being at the driver level, given how closely coupled it is to Nvidia’s hardware. So I wasn’t really thinking about broader cross-compatible APIs like Vulkan, which tend to get implemented on top of drivers providing the kind of low-level access CUDA gives. But with the exception of Apple (because we know they will never change their mind), I’m hoping the broader industry gets behind Vulkan compute shaders enough that we finally get something that delivers on the promises made when the OpenCL effort began. I want clean, understandable abstractions over the top of all the different SIMD and MIMD capabilities we have these days. It doesn’t have to magically compile my code down to FPGA hardware, but how about actually being able to get the best matrix multiplication performance out of my CPU, my GPU, or both (if I have a CPU with the right kind of integrated GPU, so that they can share memory efficiently enough to work together without slowing each other down) on an embarrassingly parallel matrix multiplication task? Is this really too much to ask of software in an era that gives us near-magical JIT performance in multiple languages, and manages to build cross-compilation/transpilation tools that can convert entire assembly code bases into JavaScript, and all the other nice things we have as software developers? It never felt like too much to me, but the fate of OpenCL appears to disagree.


I never understood the dominance of CUDA. I did GPGPU more than half a decade ago, and I feel like stuff has hardly changed. There are thread groups, group shared variables, global, and group shared atomics etc. Underneath, all GPUs run the same-ish wide SIMD architecture, so I'm pretty sure most stuff is semi-performance portable. DirectCompute, OpenCL and CUDA and probably all the others expose this same programming model. Why is CUDA so dominant?


> Why is CUDA so dominant?

nvidia decided they wanted to be the market leader, and spent $$$$ developing tools, training materials, and libraries (like cuDNN) which they gave away for free.

Other GPU vendors chose not to outspend them.


They also hire software engineers to integrate CUDA into open source projects. On the one hand, those projects can now take advantage of advances in hardware. On the other, users will have to buy NVIDIA to use it, so how open is it? Also NVIDIA will not provide any help with CI, so those costs are borne by the project – or the code isn't tested.


Which alternative? When it comes to ML there is no alternative to CUDA.


JAX/XLA seem quite popular, especially since you can use them in Colab on TPUs. IIRC, there are people who have managed to get JAX compiled with ROCm support.

Tensorflow lets you deploy to quite a few backends.

oneAPI support in GPUArrays.jl seems to be coming along.


By that logic there will never again be another workable alternative. Why not start now rather than wait a few decades?


Innovation is not an on-off switch. Market forces create an incentive to attempt to innovate, but they don't magically make innovation happen.


There's nothing that prevents you from doing that as it stands.


It's likely that there are many new designs being spun up everyday by various people, but that doesn't mean that we want to remove competition from the market.

