Scallop: A Neurosymbolic Programming Language (scallop-lang.github.io)
159 points by hasheddan on April 19, 2022 | 27 comments



The idea that you had to instrumentally combine perceptual and symbolic reasoning has been around since at least the 1970s and has been implemented variously (see, for example: Hermina J.M. Tabachneck-Schijf, Anthony M. Leonardo, Herbert A. Simon (1997) CaMeRa: A Computational Model of Multiple Representations). What those models didn't do is learn, but just pouring learning into that sort of model and hoping it will magically become a GAI seems like an implementation idea without a theory.


Did I miss where the developers are claiming this toolset will help implement GAI?


The sum-of-MNIST-digits CNN+logic example was really interesting: https://scallop-lang.github.io/tutorial.html#section-10
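In case it helps anyone skimming: as I read the tutorial, the CNN outputs a distribution over the ten digits and the logic side marginalizes over all digit pairs for each possible sum, so a loss on the sum back-propagates into the digit classifier. A minimal PyTorch sketch of that marginalization idea (not Scallop's actual API, just the underlying arithmetic):

```python
import torch

def sum_distribution(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Distribution over a + b, given independent distributions over digits a and b."""
    joint = torch.outer(p1, p2)                       # P(a, b), shape (10, 10)
    out = []
    for s in range(19):                               # possible sums 0..18
        terms = [joint[a, s - a] for a in range(10) if 0 <= s - a <= 9]
        out.append(torch.stack(terms).sum())
    return torch.stack(out)                           # P(a + b = s), shape (19,)

# Pretend these are the softmax outputs of a digit CNN on two images.
logits1 = torch.randn(10, requires_grad=True)
logits2 = torch.randn(10, requires_grad=True)
p_sum = sum_distribution(torch.softmax(logits1, 0), torch.softmax(logits2, 0))

# Supervise only the sum (say the label is 7); gradients reach both "CNNs".
loss = -torch.log(p_sum[7])
loss.backward()
print(logits1.grad is not None)  # True
```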

Anybody familiar with the space have pointers to good introductory material on this field? Or terms to search on to get at current work - differentiable logic?


The recent survey on Neurosymbolic Programming is a great introduction to the field: https://twitter.com/swarat/status/1474105098401943555 (pardon the Twitter link). I don't think the full version is publicly available, but I suspect you might be able to get access to it if you DM someone involved.

Other than that, here are some names of people working in this field that you might be able to find representative work from:

- Mayur Naik, UPenn (whose group I think were behind Scallop?)

- Swarat Chaudhuri (and maybe also Isil Dillig), UT Austin

- Luc de Raedt, KU Leuven (whose earlier work on Statistical Relational AI is very similar to the Scallop style of neurosymbolic work; I think there's a good textbook available on this)

- Guy van den Broeck, UCLA

- Armando Solar-Lezama, MIT

If you're more interested in the logic side of things then maybe some of MIRI's (https://intelligence.org/) older work may be of interest.

There are also a lot of people who are interested in neuro-symbolic stuff in the wider sense. You can find out more here: http://www.neurosymbolic.org/index.html. If you sign up for the mailing list, there are monthly (or bi-monthly? I can't remember) talks that are open to the public. You can even find recordings of past talks on Armando Solar-Lezama's YouTube channel.

Hope this helps!


Awesome, thanks!

I think I found a full preprint on the author's site (PDF): https://www.cs.utexas.edu/~swarat/pubs/PGL-049-Plain.pdf


At a glance, this looks like a new syntax in the Prolog family (with some other special stuff). Erlang, which kept Prolog's syntax but not its semantics, got a similar new syntax in Elixir.


Datalog.

> in the Prolog family

> Scallop is a full-fledged logic programming language based on Datalog [...] a logic rule-based query language for relational databases. [...] Scallop is a scalable Datalog solver equipped with support for discrete, probabilistic, and differentiable modes of reasoning


Yes, but Datalog has traditionally kept Prolog syntax, as a syntactic subset of Prolog (e.g. `path(X, Z) :- edge(X, Y), path(Y, Z).`). This is a new syntax.


What's the applicability? What are some concrete examples of problems that you would solve with it? Why is it a better fit than using your preferred language with your own code?


Their NeurIPS paper is all about scalable training/inference in ML systems that combine deep learning for perception with symbolic logic, for tasks like visual question answering.

pdf: https://www.cis.upenn.edu/~mhnaik/papers/neurips21.pdf


The "CLEVR" example reminds of Terry Winograd's SHRDLU program https://hci.stanford.edu/winograd/shrdlu/ which was written in 1968-1970.


Can someone share some links explaining this field?

https://www.youtube.com/watch?v=HhymId8dr5Q

This seems to be literally the future. What am I missing?


The video links to an overview of the backing research program, https://medium.com/swlh/neurosymbolic-ai-to-give-us-machines... , and from there I would look at the MIT site: https://mitibmwatsonailab.mit.edu/category/neuro-symbolic-ai...

The underlying big idea, as quoted from the medium article:

They posit that humans are born with a pre-programmed rough understanding of the world, in some ways analogous to the game engines used to build interactive immersive video games. This “game engine in the head” provides the ability to simulate the world and our interactions with it, and serves as the target of perception and the world model that guides our planning.

Crucially, this game engine learns from data, starting in infancy, to be able to model the actual situations — the endless range of “games” — we find ourselves in. It is approximate yet gets more and more efficient — to the point that very quickly, humans make instant mental approximations that are good enough to thrive in the world. And, the researchers think, it’s possible to replicate this type of system in a machine by embedding ideas and tools from game engine design inside frameworks for neurosymbolic AI and probabilistic modeling and inference known as probabilistic programs.


It looks really good: the sort of truly new generation of programming language we need, bringing new constructs and solutions for new challenges where the previous abstractions are already falling short.

I wish it were clearer which platforms it runs on. From the description I can only read about the Python integration, which is great, but I'm not sure whether I could run a "client" on JavaScript devices or embedded C and communicate with a Scallop backend that speaks Python.

I hope to read more about it in the following months.

edit: typo


Download page has platforms: https://scallop-lang.github.io/download.html

Seems to be macOS (M1 and x86) and Linux x86.


That's true, although those aren't target platforms it compiles to. Later today I saw that the runtime appears to be written in Rust, so it might be possible to port it to many architectures, although at the moment it looks to be a desktop-only runtime/embedded library.


The new generation of programming languages will just be plain old English text


How does this compare to NeuraLogic?

https://github.com/LukasZahradnik/PyNeuraLogic


I have been using PyNeuraLogic for a few months because of its focus on Relational Deep Learning (in particular Lifted Relational Neural Networks). I would say the main selling point is extending GNNs (Graph Neural Networks).

From what I can see on the Scallop website, the focus is on vision and NLP.

So both are similar approaches combining deep learning with symbolic reasoning (and both are based on Datalog), but the problems they are tackling are quite different. Also, both approaches have made it to top conferences like NeurIPS and ICLR, so I guess this field is gaining momentum.


Just based on the docs, at the lowest level they both create a mapping from input features to relations. Then NeuraLogic has an example of implementing GCN-like message passing with a simple relational rule:

```
node2(X) <= W node1(Y), edge(X, Y)
```
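Roughly, the dense-tensor analogue of that rule is a plain GCN-style propagation step; in PyTorch it would look something like this (shapes made up for the sketch, and this is not NeuraLogic's API):

```python
import torch

# Dense-tensor reading of "node2(X) <= W node1(Y), edge(X, Y)":
# new features of node X = a learned transform W applied to the features
# of every neighbour Y with edge(X, Y), then aggregated (here: summed).
num_nodes, in_dim, out_dim = 5, 8, 16

node1 = torch.randn(num_nodes, in_dim)                      # node1(Y) features
adj = torch.randint(0, 2, (num_nodes, num_nodes)).float()   # edge(X, Y) as a 0/1 matrix
W = torch.nn.Linear(in_dim, out_dim, bias=False)            # the rule's weight W

node2 = torch.relu(adj @ W(node1))                          # node2(X)
print(node2.shape)                                          # torch.Size([5, 16])
```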

Do you know whether it's possible to implement something like this in Scallop as well, or are the differences much larger?


The GCN implementation is based on the concept of templating (or lifting), which is not exploited in Scallop. In fact, this is the key idea of NeuraLogic for joining deep learning and relational-logic representations.

It is explained in the paper "Beyond Graph Neural Networks with Lifted Relational Neural Networks" (https://arxiv.org/abs/2007.06286) and you also have a series of blog posts at https://medium.com/@sir.gustav
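To make the templating idea concrete: the rule acts as a template with a single shared weight, and it is grounded once per edge fact, with every grounding reusing that weight (much like a convolution filter is shared across positions). A rough Python sketch of that idea, with made-up names and shapes rather than NeuraLogic's API:

```python
import torch

# One template rule ~ one shared weight. Each edge(X, Y) fact produces a
# grounding of the rule, and every grounding reuses the same W, which is
# what "lifting" refers to (weight sharing across groundings).
W = torch.nn.Linear(8, 8, bias=False)           # the template's single weight

node1 = torch.randn(5, 8)                       # node1(Y): features for nodes 0..4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]        # edge(X, Y) facts

node2 = torch.zeros(5, 8)
for x, y in edges:                              # one grounding per fact
    node2[x] = node2[x] + W(node1[y])           # same W in every grounding
node2 = torch.relu(node2)
print(node2.shape)                              # torch.Size([5, 8])
```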


Looks quite interesting. Anyone here who has used it in practice?


It only just dropped, so no one has used it in anger yet. I've written a couple of small things in it today though.

It just seems like such a lovely little language, but Prologs usually are. It is pretty damn great to be able to attach probabilities to things and Prolog away.
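For anyone who hasn't played with a probabilistic logic language before, here is roughly what "attach probabilities and Prolog away" amounts to underneath, in a deliberately naive Python sketch (ProbLog-style facts in the comments; real engines such as ProbLog or Scallop handle shared facts between proofs properly instead of assuming independence):

```python
# Probabilistic facts, e.g. 0.9::edge(a, b), plus a rule
# path(X, Z) :- edge(X, Y), path(Y, Z). The engine enumerates proofs of a
# goal and combines their probabilities. Naive version below.
p = {"edge(a,b)": 0.9, "edge(b,c)": 0.7, "edge(a,c)": 0.4}

proofs_of_path_a_c = [
    ["edge(a,c)"],                 # direct edge
    ["edge(a,b)", "edge(b,c)"],    # two-hop path
]

def prob_conjunction(facts):
    """Probability that all facts in one proof hold (facts are independent)."""
    out = 1.0
    for f in facts:
        out *= p[f]
    return out

# noisy-or over proofs; correct here only because the two proofs share no
# facts, which a real weighted model counter would check for you
prob_none = 1.0
for proof in proofs_of_path_a_c:
    prob_none *= 1.0 - prob_conjunction(proof)
print(1.0 - prob_none)   # ~0.778: probability that some path a -> c exists
```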


Someone posted another link to Scallop on HN a few days ago. I am looking forward to the next version that will have interop with Python and PyTorch.


I saw it on Sunday; I think this is the second-chance pool in action.

There’s a PyTorch example here: https://scallop-lang.github.io/tutorial.html#section-10


Is the source code available? When I go to the GitHub repo I see that the project is archived. Am I missing something here?


The idea is cool. But - which existing language is not neurosymbolic?



