OpenAI Sold its Soul for $1B (onezero.medium.com)
303 points by andreyk on Sept 4, 2021 | 86 comments



In related news, OpenAI Gym is one of the best collections of reinforcement learning environments available, and after years of neglect OpenAI has finally given a non-paid contributor permission to merge fixes and prepare new releases. So the library will continue under their name despite receiving no support from the actual company whose name it bears.

https://github.com/openai/gym/issues/2259

For a pittance OpenAI could maintain OpenAI Gym, Spinning Up, and their other early projects that have been so helpful to students and amateurs like myself. They choose not to.


You say they've finally given permission for someone to merge fixes. But it's MIT licensed, so why wasn't it just cloned and continued under another name?


> But it's MIT licensed, so why wasn't it just cloned and continued under another name?

Contributing a patch is a one-time operation.

Maintaining a fork and keeping it up to date means you have to constantly integrate upstream work into your work if you want it to remain relevant.

It's the difference between fixing a single issue and owning a project indefinitely.


Sounds like OpenAI itself cannot be bothered to maintain the original project.

A well meaning forker (forkee?) interested in RL looking to make a name for themselves could do the world a huge amount of good.


I would fork it and mainly provide time to arbitrate changes. I would argue that it is difficult to make sense of the community's desires, though. Looking at the issues, the path the community wants for the project isn't clear. If you don't mind clarifying, I wouldn't mind helping out.


Probably has been but I (and others) just don't realize it.


Because people have forgotten that forking is the response to not liking something in open source, not trying a hostile takeover of the original project.

You have to be competent though, which is too high a bar for the majority of people who want to change things.


Why is forking an abandoned project a hostile takeover? If anything, it's the opposite.


Centralised forges mean we can't really “just fork”. That worked when you got code from your friends-of-friends, so they could just give you the fork instead of the “original”, but now? How do you even find the current latest fork?


It seems easier than ever to me - you click the github Network link and see which fork has forks itself.

It'd be nice if github could show some activity metric alongside all those forks too I guess. If only they accepted pull requests!


If you're looking for a fork (and the fork is on GitHub, and it's been around for a few months), yes. But if you know about a fork, you're already ¾ of the way to finding it.

If you're getting software from your friends, generally they'll pick up the fork and you'll end up with it. If you're getting it from The Official Repository on the Centralised Forge, that doesn't happen.


>Centralised forges mean we can't really “just fork”.

How hard is it to make a public git repo?

Packaging your software falls into the 'being competent' pile that's too hard.


AI safety researcher here. I've interacted with several folks at OpenAI, DeepMind, and other such groups, but have no affiliation with any of them.

While I'm not privy to OpenAI's internal motivations, it's worth keeping in mind that as a nonprofit they face a pair of hard organizational constraints:

1) Training state-of-the-art ML models costs huge amounts of money. That means OpenAI's capex is forcibly high.

2) Their charter (https://openai.com/charter/) commits them to a "fiduciary duty to humanity", so among other things they can't accept donations which are subject to conditions that may be misaligned with their mission. That means OpenAI's effective funding pool is forcibly low.

The combination of 1) and 2) means that, absent an absolutely massive yet hands-off private donor — a rare thing indeed in the nonprofit world — it's hard to imagine a solution to this dilemma that doesn't involve some kind of for-profit spinoff. And the outcome here really is pretty binary: if you believe OpenAI will build safe AGI, then 100x returns for investors really are immaterial compared to the wealth they can expect to generate for everyone else. While if you don't believe they'll build safe AGI, then you must either believe that A) they'll fail to build AGI (in which case investors get nothing anyway); or that B) they'll fail to build it safely (in which case your concerns are presumably technical rather than organizational).

Incidentally, it's quite likely that in the beginning, OpenAI's own researchers were by no means sure they'd be able to get results like GPT-3 through scaling alone. But that turned out to be the world we live in, so they're now playing the hand they've been dealt.


This idea that either they build AGI or nothing seems like a false binary to me? Like other startups, there's a chance that they build something interesting and profitable that isn't AGI. (I'm thinking that the odds of that are higher than those for AGI.)


I agree that's a legitimate third possibility, but I don't see it as likely compared to the others. Pivoting is extraordinarily hard when you have 100+ employees and a very visible public commitment to your mission. Given that OAI filters recruits partly for their alignment with the charter, a full pivot to, e.g., a for-profit SaaS or something seems pretty much out of the question — if only because they'd probably lose a ton of employees in the process.


The claim that OpenAI needed the money seems flimsy in light of GPT-J.


A couple serious, unbiased questions:

1) How seriously is AI safety research taken by academia? Are there grants? Academic positions? Sponsored studies etc?

2) How much interaction is there between people who study the long-term dangers of strong AI and people who look at the problems and biases of existing AI systems (say, analyses of racial bias in AI bond-granting systems, etc.)?

...

Full disclosure, I have something of a love-hate relationship with "AI safety" positions. It seems you have a series of intertwined problems. So now some biased, editorializing points.

* Exactly what General AI is, what it can do and so forth, is not known and not understood. So it's hard to either know or articulate potential dangers. Often the stand-in for this unknown is treating an AI as a god-like being with an unknown agenda. This often results in contradictory and implausible worries that are hard to take seriously: so-and-so's basilisk, the "terrifying" scenario that's just Pascal's Wager warmed over.

* A primary focus of AI safety research is "alignment" between AIs and humans. This has the fundamental problem that humans are often not aligned with other humans. Humans don't always engage in genocidal world wars, but we certainly know times when they have. AI researchers like Paul Christiano don't seem particularly concerned with the problem of "aligning" humans with each other, but it seems like a pressing question.

* But let's take "AI dangers" seriously and look at an analogous situation. Suppose there's a new kind of energy with the potential for massive payoff but with some kind of danger (pollution, explosions, whatever). It's weird to say "to solve this danger, we'll create a safe form first". A rational response to a dangerous activity would be regulation for everyone engaging in it.


What does AI Safety mean in this context?


Ensuring AIs don't wind up wanting to destroy humanity.


“Want” is a problem for me here. GPT-3, for example, is just a dumb, brute-force Markov chain. Any danger it might pose is no more than that of a mechanical trigger that sits between a finger and a gun barrel. The only real intelligence in the system, and where the only danger lies, comes from the person behind the gun.


GPT-3 is a bit more than a Markov chain. There is a lot more to it than just being big. Same for GPT-2.


As soon as you put the intelligence in AI this changes. It is easy to say that current AI systems are “dumb” (although note that the precise meaning of dumb has changed significantly over the last few decades). You can say that about any AI with sub-human intelligence. But if you reach human levels, you can likely reach super-human as well, so you need to start worrying much earlier.


GPT-3 is better than everything achieved so far on multiple metrics, though yes, it's of course not an "intelligence".

However, this is ignoring possible future developments - things may get better and more advanced and that's what AI-safety people are concerned about.


Wanting is the easier of the two - the other is ensuring AIs don't inadvertently wind up destroying humanity. The road to hell being paved with good intentions and all that.


There are other ways around the cost. They could build a distributed training system along the lines of Folding@Home, I’m sure a lot of AI folks would donate GPU time.


The optimization problem at the inner loop of these methods doesn't handle latency very well. This is covered a bit in the DOTA 2 rerun work [1] (e.g., Figure 5b on staleness).

So for both large models (which are split up among multiple accelerators but are effectively one "computer") and learning from "your friends", you want to be more tightly coupled than something like SETI/Folding@home. There are plenty of workloads that are embarrassingly parallel (and now "pleasingly parallel"), but training a single model isn't one of them.

[1] https://cdn.openai.com/dota-2.pdf
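
For intuition about why staleness hurts, here's a toy, purely illustrative Python sketch (a simple quadratic objective, not anything from the paper): each update applies a gradient computed from parameters that are several steps old, the way loosely coupled volunteer nodes would deliver them.

    # Toy illustration (hypothetical, not from the DOTA 2 paper): gradient descent
    # on f(w) = 0.5 * w^2, where each update uses a gradient computed from
    # parameters that are `staleness` steps old -- a crude stand-in for the
    # delayed gradients you'd get from loosely coupled volunteer nodes.
    def run(staleness, lr=0.6, steps=40):
        history = [1.0] * (staleness + 1)       # parameter starts at w = 1.0
        for _ in range(steps):
            w = history[-1]
            stale_w = history[-1 - staleness]   # parameters the gradient was computed at
            grad = stale_w                      # gradient of 0.5 * w^2 is w
            history.append(w - lr * grad)
        return abs(history[-1])

    for s in (0, 1, 4):
        print(f"staleness={s}: |w| after 40 steps = {run(s):.3g}")

With no staleness this converges fast; with a staleness of one step it converges noticeably slower; by a staleness of a few steps the same step size is already unstable. That's the basic reason loosely coupled training either needs tight synchronization or much smaller steps.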


Frankly, AI is going to have a hard time being open for several reasons:

- The computing resources required to train models are not distributed

- Training data will often contain licensed material

- Our digital concept of 'open' revolves around transparency, which is not readily available with conventional AI

That's not to say that we should give up on open efforts in the field, but we're still deep in the experimentation/research phase of artificial intelligence. Copilot has been an excellent demonstration of how poorly suited ML is for prime-time use, even by developers.


Indeed given these fundamental constraints, OpenAI is about as open as it can be short of simply giving away everything for free.


> simply giving away everything for free

Which is what DeepMind has done with the AlphaFold code (Apache licensed https://github.com/deepmind/alphafold) and published model predictions (CC licensed at https://alphafold.ebi.ac.uk/). I guess they could publish the weights but that would probably be useless since nobody else would be running the exact same hardware.


DM did publish the weights.


Easy to do when you're backed by Google.


Except, dictating who can and can’t use their models, or what you can and can’t say if they deign to give you access https://twitter.com/mtrc/status/1363893222993498117?s=20


Ok, so there's a lot of tension here:

1. A lot of people want these systems to be open, and don't want the power that comes along with them to be locked up in the hands of a few rich people.

2. But some people also think these systems are powerful and don't want them in the hands of bad-faith actors (spammers, scammers, propagandists).

3. A lot of people also want these systems to be weakly safe and not have negative externalities when used in good faith (avoid spitting out racism when prompted with innocent questions). This is already hard.

4. Even better would be for the system to be strongly safe and be really hard to use for bad-faith purposes, but this seems unreasonably hard.

5. It's often easier to develop the "unsafe" version of something first and then figure out the details of safety once it's actually able to do something. This is basically where OpenAI is now.

6. The details around liability for the harms caused by this kind of thing are not clear at all.

So OpenAI is in this position where it has built this thing that is not yet weakly safe. People have very different ideas about how potentially harmful this could be, ranging from very dismissive ("there's tons of racism on the internet already, who cares?") to the very not dismissive ("rich white tech people are exacerbating inequities by subjecting us to their evil racist AI systems!").

What should OpenAI do with this thing? Keep it locked up so that it doesn't hurt anybody? Release it to the world and push accountability onto the end users? Brush aside the ethical questions and use the hype generated by the above tensions to get as rich as possible? So far their answer seems to be somewhere cautiously in the middle.

My personal opinion is that these questions will be very important for real AGI, but this ain't it, so the issues may not be as bad as they seem. On the other hand, maybe this is a useful test case for how to deal with these problems for when we do actually get there? Also from past experience, it's probably not a good idea for them to allow open access to something that spits out unprompted racism. I would like to see OpenAI more open, but I also realize that it's very hard for them to make any decision in this space without making people unhappy and generating a lot of bad press and accusations.


By naming it "OpenAI" they've implied what their value system is, namely point 1 in your above list. Putting "open" in the name implies libre/free/open sensibilities, in that the harm from releasing technology that could be used harmfully is outweighed by democratizing the technology and allowing everyone to use it for a variety of reasons, including for good and to combat bad actors.

Openness, in the libre/free sense, also means minimizing gatekeeping and not putting the creators in the position of making judgements about what's good and what's not.

All the other points you list are ancillary. OpenAI is a prime example of "open-washing". OpenAI got good will from the community by implying they were open (free/libre) and then hid behind all the other points you listed to not commit to openness.

If they wanted to have a discussion about the moral hazard of AI and their business model was to create a walled garden where only approved scientists, engineers and researchers had access to the data and code, that's their prerogative, just don't name it "OpenAI".


Freedom of speech is in a fragile place in our culture if we start seeing certain words or opinions (even bad ones!) as "unsafe".

This is not a criticism, just an observation of where we're at and how dramatically attitudes have shifted.


At least in the US, many words and opinions have been off-limits for a hundred years or more. Death threats are the obvious one, but various types of political or religious views have been off limits too (which specific ones has varied, of course). In the Jim Crow South you definitely would get beaten or worse if you espoused views of racial equality, and communist/socialist politics have at various points been enough to get you fired and blacklisted in many industries (still are, sometimes).


I mean, just seven years after the Bill of Rights and the First Amendment were ratified in 1791 came the Sedition Act of 1798, a law passed by a Federalist Congress prohibiting certain kinds of speech critical of the government. You might notice that this is precisely the thing the First Amendment said was off limits. Didn't matter.

Free speech in the US has practically never been principled in support of the marginalized, but rather a tool for the wealthy to maintain power.


"he result was a “capped-profit” structure that would limit the return of investment at 100-fold the original sum. If you invested $10 million, at most you’d get $1 billion. Not exactly what I’d call capped."

What a bizarre writing style / logic. It is exactly what I would call capped; it's capped at 100x.


I don't think this is an unreasonable structure for a company genuinely worried that their technology will disrupt and replace a large fraction of the global economy.

If OpenAI genuinely produced a strong AI (I'm not saying they will, I'm saying they think they will) the investment returns would be 10,000x, or possibly uncountably more than that.


>The result was a “capped-profit” structure that would limit the return of investment at 100-fold the original sum. If you invested $10 million, at most you’d get $1 billion. Not exactly what I’d call capped.

What on earth would you call it then?


PR to make it seem like they're a non-profit while still making 100x returns legally possible. 100x returns are about as good as you can possibly get (and much, much, much better than normal or even great returns), so they're pretending to act like a non-profit while still being very much for-profit. They're unnecessarily complicating their legal structure just for some publicity brownie points, and they aren't really changing anything substantial. It's also an attempt to make it seem like "you're only going to make as much as 100x on this supremely lucrative investment opportunity!" to dupe the gullible while the market is frothy and the hype is extraordinary.

Better yet, I'd call it unnecessary bullshit.


Interesting. You are far more cynical about this than I am. I can't say I know their motivations, but it seems at least plausible that they could land on this structure for earnest reasons if they both: a) believe they need more money than could be raised as a traditional non-profit, but b) want to limit the power of investors in the case where they successfully develop a strong AI.


> Not exactly what I’d call capped.

In the context of a company trying to create an AGI, yes, this is capped.


Is there a version of SETI@Home for training machine learning models where the data/models are entirely open?


Batch gradient descent can be parallelized; the problem is that all the gradients need to be calculated for each step. So if one node is slow, each step only proceeds as fast as the slowest node.

Furthermore, batch size is limited by memory, and training hyperparameters depend to some extent on the hardware.

So training in the real world is done on parallel identical hardware nodes. Farming it out with current algorithms does not make sense.
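
To make the synchronization point concrete, here's a minimal single-process sketch in plain NumPy (the worker count, sharding, and linear model are illustrative assumptions, not how any particular lab trains): the parameter update can only happen after every shard's gradient has arrived, so the step time is set by the slowest worker.

    # Minimal sketch of synchronous data-parallel SGD on a toy linear model.
    # Each "worker" computes a gradient on its own data shard; the update step
    # only happens once *every* worker has reported in, which is why one slow
    # node stalls the whole step.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 8))
    true_w = rng.normal(size=8)
    y = X @ true_w + 0.1 * rng.normal(size=1024)

    n_workers = 4
    shards = np.array_split(np.arange(len(X)), n_workers)

    w = np.zeros(8)
    lr = 0.1
    for step in range(100):
        grads = []
        for idx in shards:                      # each worker's local gradient
            err = X[idx] @ w - y[idx]
            grads.append(X[idx].T @ err / len(idx))
        # Synchronization barrier: averaging requires all gradients to exist.
        w -= lr * np.mean(grads, axis=0)

    print("final loss:", np.mean((X @ w - y) ** 2))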


Is the novelty of GPT-3 more about smart algorithms and experienced researchers, or is it about beefy hardware/ASICs that do the number crunching? Or maybe it-is-only-about™ gathering and labeling large amounts of data for training?


For GPT-3, they use the data everyone is using. Their cleaning is unique, but so is everyone's. They don't do anything special, though they do put in more effort than most.

As far as I can tell, it is a combination of smart algorithms, good engineering, and the hardware to make it happen. And scientists that had the right hunch for which direction to push in.


> Or maybe it-is-only-about™ gathering and labeling large amount of data for their trainings?

Well, GPT-3 isn’t a classifier and it isn’t using labeled data.

As an outsider it definitely appears that GPT-3 is an engineering advancement, as opposed to a scientific breakthrough. The difference is important because we need a non linear breakthrough.

GPT-3 is a bigger GPT-2. As far as we know, there is no more magic. But I think it’s a near certainty that larger models will not get us to AGI alone.


As someone in the deep learning community, I disagree with your assessment that GPT-3 is not a scientific breakthrough. The GPT-3 paper won a best paper award at one of the most prestigious machine learning conferences, after all. GPT-3 didn't make any modeling advances, but it introduced a completely new paradigm with few-shot learning.
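
For readers unfamiliar with the term, here is a minimal, hypothetical sketch of what few-shot prompting looks like in practice (the translation task and strings are made up for illustration, not taken from the paper):

    # Hypothetical illustration of "few-shot" prompting: no gradient updates,
    # just a handful of worked examples placed in the prompt, after which the
    # language model is asked to continue the pattern.
    examples = [
        ("cheese", "fromage"),
        ("house", "maison"),
        ("cat", "chat"),
    ]
    query = "bread"

    prompt = "Translate English to French.\n\n"
    for en, fr in examples:                    # the "shots"
        prompt += f"English: {en}\nFrench: {fr}\n\n"
    prompt += f"English: {query}\nFrench:"     # the model completes this line

    print(prompt)
    # The prompt would then be sent to a language model's completion endpoint;
    # whatever continuation it generates is the few-shot "prediction".

The point is that the "learning" happens entirely in the prompt: no weights are updated, the model just continues the pattern from a few examples.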


Perhaps we have to distinguish between GPT-3-the-model and the GPT-3 paper. IMHO GPT-3 as a model is straightforward engineering, putting a lot of resources into an oversized GPT-2; and while there's significant novelty in the "Language Models are Few-Shot Learners" paper about how exactly you apply these models, that is orthogonal to GPT-3-the-model. The scientific content of that paper applies to any other powerful language model and isn't intimately tied to the specifics of GPT-3.

In essence, I feel that the same people introduced two quite separate things: a completely new paradigm for obtaining few-shot learning from a language model in a way that competes with supervised learning of the same tasks; and the GPT-3 large model, which is used as "supplementary material" to illustrate that new paradigm but is also usable with the old paradigms, and by itself isn't a breakthrough. And IMHO when the public talks about GPT-3, they really mean GPT-3-the-model and not the particular few-shot learning approach.


Those two are tied together because many of those few-shot capabilities only emerge at scale. If OpenAI had trained a large model and not analyzed it rigorously it would have very little scientific value. But it would have been impossible to have the scientific value without the engineering effort.


Agree, just because it took money and scale to do it doesn’t mean it isn’t a breakthrough.


Bigger models might get us to AGI alone. I say that because of the graphs in this paper: https://arxiv.org/pdf/2005.14165v4.pdf

Quality is increasing with parameters. Even now, interfacing with codex leads to unique and clever solutions to the problems I present it.


Not in the field, so genuine question: what is the evidence/theory to support the notion that deep learning is at all a reasonable route towards AGI? As I understand it, this is nothing like how actual neurons work - and since they are the only "hardware" which has ever demonstrated general intelligence, hoping for AGI from current computational neural networks feels like a stretch, at best.


Why is AGI the goal instead of continuing to augment human intelligence with better tools?


> OpenAI had decided to put emphasis on larger neural networks fueled by bigger computers and tons of data.

So is the answer here both?


To make a system noticeably better, it’s often easiest to make all components of the system slightly better.


But that is not what GPT-3 is. You could say that is what they did _given GPT-2_, but that is underselling the complexity.

Some scientist at OpenAI had the hypothesis "what if our algorithm is correct, and all we need is to scale it three orders of magnitude larger?" and made it happen. They figured out how to scale all the dimensions: how fast should x scale if I scale y? (That is very tricky, as modern machine learning is basically alchemy.)

And then they actually scaled it. That took a ton of engineering and hardware, for what was essentially still following a hunch.

And then they actually noticed how good it was, and did a ton of tests with the surprising results we now all know.


> still following a hunch

I'm not sure many would agree that the desire to scale was simply a "hunch".


It's mainly about their scale which is enabled by their hardware budget (355 GPU years to train it) and their budget for data acquisition / cleaning.

GPT-3 has 175B parameters. The previous largest model was Microsoft's Turing-NLG which had 17B. GPT-2 had 1.5B.


On one hand the author seems to make fun of the notion that language models could be dangerous.

But then a few sentences down the author complains:

"The company “in charge” of protecting us from harmful AIs decided to let people use a system capable of engaging in disinformation and dangerous biases so they could pay for their costly maintenance."

So the author doesn't even present a coherent position.


The difference is that in the former scenario, the one OpenAI is supposedly protecting us from, the danger is that OpenAI is "too good", capable of doing damage in the hands of nefarious actors: something generating hundreds of fake news articles a second which are all perfectly believable.

The danger in the latter scenario is that OpenAI is garbage: when used to answer a perfectly innocuous request, it regurgitates disinformation and biased results. OpenAI can't gatekeep for "bad" people since it's actually good people interacting with dangerous AI. Like when asked to solve a simple math problem, it just gets the math straight up wrong, but people trust it.

OpenAI wants us to believe its AI is so good it needs to be protected from "bad people" and built its safety protocols around that threat model, when it seems far more likely the latter is going to happen, and they haven't done a thing to protect us from that.


> So the author doesn't even present a coherent position.

Maybe the author is a bot.


More could be done to fund research that showcases the importance and value of transformative models, so that countermeasures to existential threats have time to be developed. So long as there is a practical strategy for mitigating the philosophical problems, such as software licenses, and for risk-managing existential threats to public safety, I think OpenAI deserves the liberty to take a profit. But exclusively licensing such models to corporations is very harmful to liberty, and continual exacerbation of the imbalance of economic power will lead to humanicide. All people need to benefit from this technology, or the people who are ignored will suffer from it.


Anything they do other than writing a morally-incentivized governance system for reinforcement models (aka pushing an API for a trillion param NLP model) is just fake.


Could you expand on this? I can't really parse it; the first thing that came to mind was a DAO where token holders could vote on...something related to parameters and fine tuning, but idk what.


safe reinforcement learning that won't turn the universe into a paperclip factory


> 8-minute read

I got just one paragraph. Where's the rest of the article?


It didn't work in Waterfox with NoScript for me. It did work in Firefox (with uBlock enabled).


I think one of the only second-order effects of wokeism is that, many times, it gets results.

“Call-out culture” runs the gamut from obvious to pedantic, and, more often than not, the result is a net positive.

So, my question is: Why isn’t there a concerted effort from [OpenAI’s] like-minded contemporaries to call them out on their incredibly embarrassing name?

To all the OpenAI engineers, investors, and onlookers: there is _nothing_ “open” about your platform. Am I stupid or are your efforts just casually misleading?


I often see people calling them out for being 'ClosedAI' instead of OpenAI, so I do think they are being/have been called out by the ML community.


I’m not seeing the connection?


If they started out as an MSFT venture, they probably would be called something else.

The reality is, they have a lot of mouths to feed, and the money was going to have to come from somewhere.

I think it's fair to be cynical, but the name itself is an artifact of history.

Perhaps if they were more truly authentic, they'd have nudged the name as well, but I think the concerns should mostly be over the materiality of their actual 'openness' in operations, not so much the name.


There is, isn't there? You're responding to an article roasting OpenAI for not actually being open. They seem to get plenty of bad press. Is it just not enough is coming from other AI research labs for your liking?


Artificially Open Intelligence


The 'results' are more plain to see in callout culture, but the negative effects are usually systematic and hidden.

In entertainment we don't see all the scripts that were given a pass, or the actors that were not considered because they were going to cause controversy by being the wrong ethnicity/gender. Comedy writing rooms in particular are generally "anything can be said" unsafe spaces, which is an important part of the culture of good writing and one with a questionable future.

If you go and have a look at films made just between 2000-2015 - you can see how many of them - even though they don't rise to the level of controversy today, and are probably not inherently controversial - would not get made today.

'The Last Samurai' with Tom Cruise - some could argue it's 'appropriation', though I think many would argue that it's not, and it's not really a 'White Saviour' film either. It shouldn't be objectively controversial per se, though maybe fodder for discussion, but it probably would not get made today because it just has the potential for ruin: there is far, far too much risk of an escalating cascade of populist voices calling the film all sorts of things. With or without legitimate grievance, it won't matter - the mob rules.

Tom Cruise has 10 projects on the drawing board, and he's getting pitches daily, so there's going to be 'a different film' without the risk profile to his personal brand.

Studios (and BigCos) are risk averse and so what we see is a 'watering down' effect across the board.

Conan O'Brien had Sean Penn as a guest on his podcast and had some thoughtful things to say about it.

So does the analogy hold for capitalist control of OpenAI? Possibly.

MSFT might be able to push for some 'results' which are seemingly more obvious and public, but the systematic effect of 'closed' and 'proprietary' results in stifled innovation and opportunity otherwise.

It's actually an interesting analogy, but I think it holds for reasons you might not suspect - namely, the big part of the iceberg that never gets seen.


Their mission of making AI available to a broad section of society vs concentrating power among a few special interests justifies the name as far as I’m concerned. The name open does not have to mean “everything is FOSS and available on GitHub” in my opinion.


A black box model, a subscription fee for API access, and an exclusive licensing deal with Microsoft doesn't sound very open.

I mean, you can make a better argument for Windows being "open": it's available to a much broader section of society, as an installable product rather than API privileges that can be withdrawn, and a lot more of what's under the hood is exposed, documented, and moddable.


But they keep their models secret. How is that "making AI available to a broad section of society"?


“Through licensing!”


I don't know what their mission used to be, but now their website says "Our mission is to ensure that artificial general intelligence benefits all of humanity." -- quite a bit different from access!


The same issue came up regarding Reddit recently. If you build something cool that becomes popular then big businesses will try to buy you. I think if you want to be taken seriously as a non-profit and have people contribute to your cause on that basis, then you should be able to guarantee that you will remain a non-profit, and free from external control.


I think that article does explain that it was basically a “sell” out of necessity:

> OpenAI was an AI research laboratory. But its ambitions were simply out of reach for the resources it had access to.


That is the most distant definition from "necessity" I can think of.


> That is the most distant definition from "necessity" i can think of.

It is a sign of the rapid transformation of our world that the above statement is simultaneously true and completely missing the context here.

Why? Think about the context of OpenAI's founding. As I remember it, OpenAI wanted to offer an alternative locus of power compared to Google and Facebook around AI. They wanted to have the spotlight to talk about broader topics and share advancements more broadly.

To accomplish that mission, there are many hard compromises that have to be met.

To be clear, I'm not assessing how well OpenAI did. I don't know the right answers.

Rather, I'm pointing out the constraints in their space are substantial. I don't think anyone can dispute the upsides of having a large budget -- to hire in-demand experts and train expensive models. What are the best ways to acquire this funding? I think there are many thought experiments to conduct and cross-comparisons to make.


According to the article, they have decided that the way forward is going with “bigger and better” models, which created the need for more resources.

I’m going to trust them on their reasoning, but with that premise what are the options here if they lack required resources?

If they are to avoid what is dubbed a “sell” to private money, the other options are to get some public funding or to be honest about the issue and close up shop.



