National Artificial Intelligence Initiative (ai.gov)
202 points by KoftaBob on May 5, 2021 | 102 comments



When I opened the page I was disappointed not to see a picture of someone using confusing holographic UIs to navigate some sort of graph. Scrolling down, I see my fears were unnecessary. AI == holographic UI.

The girl in Education and Training appears to have no fewer than 16 unique holographic objects surrounding her.

On the other hand, the applications girl has a far simpler visual interface, but appears to require an advanced Spider-Man hand gesture to interact with it properly.


Completely off-topic, though related to your observation: one thing I greatly appreciated about the movie Her is that they never give any kind of visual representation of the AIs, not even an icon or anything. I thought this was a very bold choice, especially for a main character in a visual medium, and it made her (sic!) all the more memorable for it.


There's also the scene where he plays video games with gesture-controlled inputs but no UI that shows how to perform them or how they're being received. An odd choice that stuck with me.

I think it was a very good call not to show the AI visually, because it implied a much more emotional bond. The depth of his feelings might feel a bit superficial if the AI looked like Scarlett Johansson.


Unfortunately AI has not yet advanced enough to remove the blocky GIF artifacts when you zoom in on the ET girl.


Well, I was about to say - the applications girl is going to click on what, exactly, parts of the brain? Then there are these images like "$" or "mobile phone" or "chat"... If I were presented with that interface, what am I expected to get if I click on the amygdala - some kind of bitcoin payout?


That button dispenses advertisements to neuralink users.


I'm just glad it's WordPress.

It appears the Japanese artist specializes in futuristic holograms: https://www.shutterstock.com/g/chombosan


Maybe their first AI system should be dedicated to generating field-appropriate marketing materials & stock images.


But then how will they acquire funding for their next AI system?


Easy! Their marketing AI would create such a perfect pitch deck to ask for funding that it would be practically guaranteed.


ok


What's so wrong about it? It's ugly, but it's just a graphic. It's not like it's going to forever taint their research.


It's not that it's ugly-- it looks fine. It's just that it conveys no information about what it's supposed to represent.


It’s a metaphor. Beautiful metaphor.


I think it's charming personally.


[*] Re-launches.

The White House has had an AI.Gov site for a few years, operated by the White House Office of Science and Technology Policy. For a short while during the transition, the AI.Gov site redirected to a subsite managed by NIST, but now it is back to having a self-managed site.

You can see the old AI.Gov site here: https://trumpwhitehouse.archives.gov/ai/

The HN entry: https://news.ycombinator.com/item?id=19446726


But now that it's associated with Biden, it's really, really wonderful!


Despite the sarcasm, I think your comment makes a really good point. People have lenses and biases that make it difficult to objectively evaluate the actions of politicians/administrations independent of how much they like the individuals in charge and of other, unrelated actions those politicians may have taken.


It's also really hard to find just plain descriptions of what all the different federal departments are doing and what it means. It's mostly considered too boring for general news unless someone has been able to generate some drama around it. There are huge areas where there is no reporting at all. The best you'll get is industry specific news sources, but that will usually have a pro-industry bias.


What a great opportunity to evaluate politicians with the help of AI, comparing their speeches with the actual facts.


I wouldn't paint Trump's so-called presidency with a brush that makes it look like he was a victim of lenses and biases. It's not our problem that he is objectively uneducated, unpresidential, racist, sexist, and just downright lazy. It shouldn't be surprising that his lack of leadership tainted nearly everything in the government, and that his few attempts at leadership actively hurt governmental policies and initiatives.


Again - I wasn't talking about how we evaluate/measure the person/politician. I was talking about how our views of the politician affect our evaluation of the things that take place under their stewardship. And a large fraction of the U.S. disagrees with you on many of those (very subjective) opinions you just expressed. No need to presume that your opinions are the absolute truth that everyone must be enlightened with.


Politicians deserve to be evaluated for their body of work and history which includes statements and actions which may cause one to either like or dislike them.


I wasn't commenting on evaluating politicians. I was commenting on evaluating things they are involved with and do.


They get involved in certain things by being who they are. And they avoid other things and other people by being themselves.


Eating breakfast isn't bad just because Hitler also ate breakfast.


Yes. And voting is the judgement.


That's reducing judgement down to a binary option which doesn't quite capture the breadth of actions. No president has only good policies. No president has only bad policies. All deserve some form of criticism and praise. Some more than others, in either direction. But reducing all this down to an election is ridiculous. They represent us. The system isn't vote them in and let them do whatever they want during those 4 years and then decide if that was good or not. We still need to operate in real time.


Voting is only the part of the judgement that intersects with voters' direct choice. 99% of it is the public discourse of opinions -- judgements -- that goes into influencing people when they cast their vote.


Take it to Facebook, loser.


It was introduced in legislation over a year ago and became official while Trump was still president. You may be projecting a bit too much here-- I haven't seen anyone give Biden credit for this.


It's from the White House Office of Science and Technology Policy (OSTP) [0]. The audience for OSTP is cross-Federal, and the contributing agency scope for this site is wide. The Office of Technology Assessment (OTA) [1] used to do this kind of technology policy work too. OTA served the US Congress, is no more, and may or may not be worth bringing back [2].

Personally, I like seeing deep (no pun intended) technology expertise at work in US government. The Michael Lewis book "The Fifth Risk" was hagiographic that way.

[0] https://www.whitehouse.gov/ostp/

[1] https://en.wikipedia.org/wiki/Office_of_Technology_Assessmen...

[2] https://www.brookings.edu/research/it-is-time-to-restore-the...


In today's atmosphere, the invention of AI would let the top few people (not percent, but people) absorb the entire wealth of the nation and have a completely loyal and merciless personal army powered by AI to cement their high status. Right now the top two people in the US own more than the bottom 40%, but this status is shaky and depends on a judicial framework run by people with their own opinions. AI would remove this uncertainty.


I think this statement rings true if you replace "AI" with most, if not all, technologies.


The difference is that with other tech, if you have enough capital you can try to compete. With AI, capital is not enough; you need oceans of data, which is only available to the few companies that were able to position themselves early enough at the data spigots. For example, no matter how much capital Microsoft throws at it, they aren't able to get access to all the data generated by mobile.


This! I work for a company with a ridiculous amount of data that goes back years. I just don't see how any new startup (even a well-funded one) is gonna compete. It's not like they can go and ask users to resubmit the data.


It's entirely feasible that you don't need ridiculous amounts of data to generate an AI - that's just the approach being taken by the vast majority of research teams.


Modern machine learning/deep learning takes a bunch of data and uses high-dimensional, more-or-less brute-force methods to approximate that data with a curve. It often works well (it seldom works "great" 'cause the data can't fully capture the situation).

The appealing thing about this is that the programmer doesn't have to understand anything. If you have little data, approximation just isn't going to capture the situation. Either the programmer gains an understanding of the system (extremely costly and time-consuming) or we create systems that are themselves capable of this understanding. But no one knows how to do the latter; all the "artificial intelligence" victories anyone has observed have come from throwing computing power at a problem. Maybe someone will figure out how to throw computing power at the general problem of understanding, but I'm doubtful.
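A minimal sketch of the curve-fitting point (assuming only numpy; the polynomial degree and the data are made up for illustration): a high-capacity fit approximates the data it saw without any "understanding", and says nothing reliable outside it.

    import numpy as np

    # Noisy samples from a "true" process -- a stand-in for training data.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

    # Brute-force curve fit: a degree-9 polynomial has enough capacity
    # to approximate the data without modeling the process behind it.
    fit = np.poly1d(np.polyfit(x, y, deg=9))

    print(fit(0.5))   # inside the data: close to sin(pi) = 0
    print(fit(1.5))   # outside the data: extrapolation is garbage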


That's totally fair. I'm not an AI researcher. From what I've heard from internal folks, a new competitor might be able to compete on one or two models and create some niche, but not across the entire market. The market is big enough that the competitors can still be viable companies, but we are not resting on our laurels either, so it'll be interesting to see.


What makes you think all that data is going to be relevant? One of the points of fragility of deep learning systems is that the data they're trained on only represents the past, and the conditions of the present can change.

I mean, there are a few situations that can stay pretty constant, but just think: once a bunch of AI systems are established, all assuming the world won't change, a cascade of changes arrives and the systems' failures become part of that cascade. Of course, we already saw the fragility of modern systems with Covid, so let's add even more!


Honestly, I don't know. Not an AI researcher. But for competitors to be viable, they also need to get users to give them data, plus the infrastructure to handle it, and that's not an easy task. We actually lost a contract to handle massive (PB-scale) user data a few years ago, but a few years later our competitor lost that contract back to us because they simply couldn't handle the volume and the processing demands.


This is hardly a problem. I can assure you there's enough public information available on the Internet that you won't be able to get through it with any equipment reasonably accessible to a single person.


Your scenario would be plausible if what was being created now were true "artificial intelligence", an actual means to reach human-level or higher performance (and even then, I'm not sure how a few top people could share that power - it seems a bit like Sauron's ring there).

But if we're talking present-day deep learning/machine learning/etc., these systems are not doing much more than pattern recognition in an impressive, sometimes useful, but never reliable way. They'll add to the capital of those on top but won't be a fundamental game changer unless something changes (and their limits seem to be getting clearer despite more and more processing power being thrown at them).


Agreed, but it would be kind of cool to have an Apollo-style program for real AGI. I would feel pretty comfortable placing a bet that with (essentially) unlimited funding and the resources of the US govt, a 2030 goal for real deal AGI is possible. Like, if it’s possible at all with anything remotely within our current technology set, we’ll know by 2030.

Too bad we (as a country and somewhat as a species) don’t really do ‘big’ things anymore.


I like the idea, but I have to describe how hard I'd see it being.

With things like either the flight to the moon or the atomic bomb, the basic principles involved were established and the engineering process to accomplish the task needed to be worked out (which isn't to deny these were huge, difficult projects).

Unlike moon shots and Manhattan Projects, large bureaucracies have a poor record for automating discovery. Add to that that nearly everyone is lost on the question of how to get to AGI (if it's not by brute-force training). And those few people who have an opinion on the subject don't necessarily agree with each other, and they speak in generalities that can't really be implemented. Having 1,000 or 10,000 people brainstorming seems as plausible as anything, but the problem of keeping such pure research motivated and directed at all is itself very hard.

The process is actually kind of like... a university. In fact, universities are already where creative research tends to happen. But our universities -- universities across the world -- have evolved to keep researchers on a shorter and shorter leash. To get this unlimited speculation, you'd have to undo that, and lots of people have an interest in keeping things the way they are.

And it's worth considering that wingnuts and crazy people are very interested in AGI along with very smart people - sometimes you get someone who's all three things, even.

So managing 1,000 people trying to get to AGI would be a serious challenge no matter how you cut it. Plus, the results of AGI might not be what anyone but its creator(s) wants, if that. So there are some hurdles.


I'm not sure that even unlimited funding would get us there anytime soon.

Human mental health is a very difficult and intractable problem to solve, in part because we have a very poor understanding of the closest thing we currently have to AGI -- ourselves. I wouldn't expect the human race to be capable of producing AGI that isn't batshit insane until we fill in some of the gaps in our knowledge there.


Nowadays AI can recognize speech and search for data. Someone should implement an AI application to compare political speeches with actual facts and show the inconsistencies in real time. That's a way AI could make the world better.


Ah yes, the Ministry of Truth, what a great idea. Who decides that truth? You or me? Because I'll change my mind about whether it's a great idea depending on which of us decides.

The problem here is that frequently there isn't an objective truth. There's often a substantial amount of nuance involved too (something ML can't really process yet). You'll notice that fact checkers often limit what they fact check to subjects that are well documented and not contentious in the realm of experts in the field. Unfortunately many problems don't fit under this umbrella.

Of course, you could do this in a limited sense, but there's a can of worms that comes with it. Unfortunately we don't know how big that can is, just that if we open it, there are worms inside. So we should be cautious.


That presumes that there is a completely unbiased person or group that knows what the objective truth is.

What you can do is measure to some degree which direction certain publications or articles are leaning, such as left/right or establishment/anti-establishment. And whether they are more provocative, or more nuanced. That way you can be aware of biases and try to integrate information coming from different perspectives.

https://www.improvethenews.org/


We have a person like that in the US. In fact we have about 328,000,000 people that are fairly sure they are that sort of person. (well, we can probably subtract out the population under 2 years old, that's about the age my kids started strongly expressing their views on how things worked.)


There's no AI yet. But if it's invented, it will be promptly used to create autonomous quadcopter-style drones that patrol and police their territory and recharge themselves when necessary. It'll be used to make an "Agent Smith" type of surveillance bot that listens to people talking in their homes (over Alexa and similar mics) and intelligently finds dissenters with extreme accuracy. That's something every dictator would pay half his empire for.


That would require a central repository of "actual facts". Even if there was one and everyone agreed on every entry, there would still be the issue that a vast majority of political speech is opinionated interpretation of a selected group of facts. Fool-proof automatic fact checking might be nice, but it wouldn't solve the problems inherent in the current state of political discourse.


I really don't know what you mean here. There's no mention of AI being used to replace the judicial system. How do you see the judicial system being subverted by AI?


I am hoping that in the near-term future we can have serious conversations about the usage of AI and where it stands legally, because what I see in 2021 is essentially a free-for-all (Clearview AI being one example). Can anyone link me to a comprehensive set of rules/laws governing AI, at all, in any country on earth?

I assume it's now legal to collect whatever data you like, as much of it as you wish, and even if it leaks online (as the recent FB data breach did), you're "a-okay". Cool, so now I know PII is fair game to collect, sell, profile, and use as I please; any actual real-world human impact from this (deaths, scams, anything in between) is not my problem. I know for a fact I won't be prosecuted, fined, or challenged in any way (provided I'm Mark Zuckerberg and appease my overlords); anyone insignificant will get the book thrown at them, as is typical.

We seem to have a real issue getting to grips with this new technology and how it can be used both in good ways, to enhance industries, and against humans (racial bias/profiling, for example).

I honestly am amazed "AI" is even legal today; it should be explicitly banned entirely until we decide on "ground rules", then slowly but surely opened up. Any usage of ANYTHING considered "AI" should go before a court -- an "AI supreme court", if you will -- with a license issued exclusively for that usage (it would be monitored, its inputs/outputs tested, and all software code/algorithms would be available for review by the court, which would consist of highly renowned developers and people who understand how the tech works).

But that's clearly not how America works - even if there's real human harm and suffering, the rules/laws only come later, after the bloodshed, far too late. It'll sadly be the same here.


I think your model for how laws get written is wrong.

Among civilians, AI is more like we are in the 1930s with airplane clubs--advancing technology out of the love of the idea. Ethical questions are important, but the harms do not seem significant enough to regulate before experimenting.

Among the military, it's more like we are in the 1950s with the US/USSR nuclear arms race. The (real or perceived) need to drive AI technology without regard to ethics whatsoever comes from an analysis of the opponent which says they are not regarding ethics.

There are no laws for civilians regarding AI. There are no treaties for countries regarding AI. And there won't be until after someone gets hurt.

This is the normal model for laws.


There are already export regulations on AI in the US[1], Europe is proposing some heavy restrictions[2], and there have been local bans on some uses already[3][4].

[1]: https://www.theverge.com/2020/1/5/21050508/us-export-ban-ai-...

[2]: https://www.wired.com/story/europes-proposed-limits-ai-globa...

[3]: https://www.wired.com/story/portlands-face-recognition-ban-t...

[4]: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban...


The question is: does this model work for potentially civilization-ending technologies like AI, nanotech, geoengineering, or gene editing? Should research that has a very, very low (but non-zero) chance of ending us all be regulated? Would that regulation even lower the chance?


There is a terminology problem.

When you say AI, it sounds like you are mostly referring to automated data collection. This has one set of concerns.

AI is also commonly used to mean algorithmic decision making. Again, a different set of concerns. The mechanism behind the decision is not really important, but people have concerns about, e.g., the fairness of automatically deciding based on a set of factors or on past trends. There are also concerns about the reliability of opaque AI models, because stakeholders don't understand how they work.

There is also AI as sold by consulting firms which is usually just some variation on a BI dashboard, dressed up in modern lingo.

And then I suppose there are some future concerns about actual AGI, that are more about having created some sentient thing, but those are more philosophical at this point.

Normally I don't like getting hung up on terminology, but with AI, there are so many different meanings that it is best to use some other term, especially when we talk about what is regulated. It would be silly e.g. to be perceived as regulating linear regression when you are concerned about data collection.


> And then I suppose there are some future concerns about actual AGI, that are more about having created some sentient thing, but those are more philosophical at this point.

How would we know? We can't even explain our own sentience.


I totally agree with the assertion that there appear to be no rules against data collection, or consequences for leaks.

However, with respect to forbidding AI - the first thing this comment calls to mind is the Butlerian Jihad [1]. I am not optimistic.

To me it's a fantasy to believe that AI use could be restricted by national laws, although the EU is trying. Your subcontractor in Country X is going to receive the clickstream and do "something" for you. I am having a hard time seeing how "train AI with the clickstream" is going to be forbidden (or how such use could ever be _proven_).

I think the discussion has to be more fundamental than this. Even on HN I don't see a lot of coherent alternative suggestions.

The genie is out of the bottle.

Now what could we actually do about it, in a realistic productive way?

[1] https://en.wikipedia.org/wiki/Dune_(franchise)#The_Butlerian...


If you're on Twitter, I recommend following Ryan Calo, as he's a good gateway to the community discussing these issues.


You might find the European Commission's recent legal framework proposal interesting: https://digital-strategy.ec.europa.eu/en/policies/regulatory...


The term AI is fairly imprecise, and might include game-playing AIs like Deep Blue and AlphaGo, or research-focused systems like FermiNet and AlphaFold. I don't think those applications are in need of regulation. What really needs to be regulated is:

1) Mass tracking and collection of personal data.

2) AI being used to make important decisions that affect people's lives.

3) Autonomous weapons. (This is a subcategory of 2, technically, but has its own unique set of problems.)


In the U.S. at least, I think the FTC is already looking to overreach in this department. See FTC v. LabMD.

You mean like a FISA court but for AI use? Look at how even FISA courts turned/are turning out. In a perfect world I would like to see the three laws of robotics take precedence.

I overheard someone at DefCon one year say that it isn't companies we should be afraid of but the guy that is on his 5th Red Bull of the night and is about to flick the switch to on.


The FDA is heavily regulating medical AI products, and there is an extensive legal framework around them. See 510(k).


A government with that power would be far worse than any AI could be.


AI is just statistics. Are we really going to make math illegal?


This is a fair point.

1. There is already a (terrible) precedent for making math illegal with encryption

2. What people really want to regulate is the use, and (my observation / paraphrase) it's really about how data is collected and aggregated, and what kind of data is used as the basis for automated decisions. The challenge is that this all gets bundled as "AI" and pushed by people that don't understand the underlying science, so there is a real danger I believe of accidentally outlawing some math, simply due to ignorance.

I think part of the solution is calling out nontechnical "AI ethics/fairness/bias/trust etc" people that don't know what they are talking about but are pushing policy in response to their own misunderstanding of what modern "AI" is.


> AI is just statistics.

Incorrect. There was AI before statistical machine learning.

> Are we really going to make [AI] illegal?

No. The question above is about the uses of AI, not the least of which includes protected rights at a national level as well as international treaties regarding warfare and human rights.


Give me a break!


Is it common in the US to have this kind of "national initiative"?

In Canada, we have various "National {x} Strategies", but my perception is that they rarely accomplish anything and serve more as favors to Canada's elites and as tools for political gain.

I had always associated this kind of thing more with our kind of political system. How is it perceived in the US?


Curious you should mention it (from Canada)!

"Since Canada published the world’s first national AI strategy in 2017, more than 30 other countries and regions have published similar documents as of December 2020."

"The launch of the Global Partnership on AI (GPAI) and Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory and Network of Experts on AI in 2020 promoted intergovernmental efforts to work together to support the development of AI for all."

source: AI Index Report 2021 from Stanford HAI Lab

[0] https://oecd.ai/wonk/oecd-and-g7-artificial-intelligence-ini...


I'm not sure whether there's a real strategy here, or whether it's that one of the top researchers in the field, Turing Award recipient Yoshua Bengio, made the explicit decision to stay in Montreal [0]. If the name sounds familiar, he's Samy Bengio's brother.

[0] https://mila.quebec/en/person/bengio-yoshua/


Samy Bengio is Yoshua Bengio's brother, not the other way around.


Brotherhood is not commutative!?


In the USA, many technologies come out of government research labs, but almost exclusively from the defense space. As far as I know this type of thing hasn’t really produced anything of note, but sometimes it does affect policy.


It all depends on the amount of funding and top-level support.

Many technologies get created within the DoD and the rest of the US government, with most not achieving widespread adoption (I'm using that as the criterion for being of note).

Some do, such as the Library of Congress classification system that replaced the Dewey Decimal System in many libraries (https://www.loc.gov/catdir/cpso/lcco/). Also the internet, which grew out of the ARPANET project (https://www.internetsociety.org/internet/history-internet/br...).


A good thing, I suppose. I read through the first linked paper and there was so much boilerplate. Someone needs to train a transformer model to remove boilerplate for casual reading.

I have to love the domain AI.gov, I wish I owned something similar. I do have M-x-AI.com, but that is really only meaningful to Emacs users, right over the head of vim users.


Do you mean this OSTP report on cloud computing? Personally I think it's pretty straight to the point for a government report. There's about half a page of introduction, and then it goes straight to specific recommendations and goals. They even put the core idea for each point in bold...

[pdf]: https://www.nitrd.gov/pubs/Recommendations-Cloud-AI-RD-Nov20...


That was the paper. I agree with you that for government reports, it is not too verbose, I was just wishing that someone else would do the work of training a model to summarize government reports.


A lot of US government boilerplate is there to document authority, responsibility, and jurisdiction -- basically, "why we're qualified to spend the taxpayer's money doing this, who told us to do it, and why it's necessary".


I love the concept. The White House has been lacking any form of intelligence for quite some time, so this is a step in the right direction.


This initiative was started under a previous administration.

https://trumpwhitehouse.archives.gov/ai/


> The White House has been lacking any form of intelligence

This is exaggerated.

What do you mean?


It sounds like they’re making a joke about the intelligence of past presidents.


No need to limit yourself to the past.


++


And it's still devoid of it. Nothing to see here, other than performative pedantic prose.


Is anyone else kind of freaked out by how "AI" is treated? Like, AI includes everything from handwritten algorithms to data dashboards to neural networks -- and yet the concept seemingly assumes that if people are involved in the information process, the AI isn't complete. I find the AI focus somehow deceptive and unhealthy. Am I alone in this reaction?


I agree. You can usually replace "AI" with "software" and not lose anything. In this case, when you do that, I think it's a great idea that the US has a national software initiative to advance research in these areas, create more education and training, help industries use software, and better understand the risks of things like untrustworthy software.


> You can usually replace "AI" with "software" and not lose anything.

I would rather replace "AI" with "software written by software". For someone in technology, the latter better suggests the level of indirection involved, and the potential for harder-to-detect bugs and corner cases.


IME, "AI" rarely means "software written by software" and more often means "we use scikit-learn to do some small step in our system".
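Something like this hypothetical sketch (the ticket-routing task and the data are made up for illustration): the "AI" is one fitted scikit-learn estimator doing a small, unglamorous step.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled data -- the "small step" here is routing support tickets.
    tickets = ["refund my order", "app crashes on login",
               "cancel my subscription", "login page crashes"]
    labels = ["billing", "bug", "billing", "bug"]

    # The entire "AI system": TF-IDF features plus logistic regression.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tickets, labels)

    print(model.predict(["it crashes when I log in"]))  # likely ['bug']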


AI is only very rarely software written by software.


They can call it the Fifth Generation Computer Systems initiative.


I’m wondering if the “justified font” selection was chosen by a human or AI…


“Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States.”

– President Donald J. Trump


They need a better name. This is a blue sky research initiative, and any AI is going to be based on some equivalent of a neural net.

I propose they rename themselves to something like "BlueNet", or maybe "SkyNet".


Can we work on getting real intelligence into Congress first?


If I could upvote this a million times, I would.


Big "Back to Top" button is big


Commentary about minutiae of the layout is uninteresting, both from my POV and per the HN guidelines. Please help keep HN interesting.


And it's on the left. I kind of like it.



