> Its paid features are always enabled when completing Rust code, in acknowledgment of the fact that TabNine could not exist without the Rust ecosystem.
Think of it like "I couldn't have done it without you" when people accept awards. Sure they could have, it's just an expression of appreciation.
He might also not have chased the project if he weren't working in an ecosystem he particularly liked, so who knows, maybe it wouldn't have happened without Rust.
Sure, makes sense. The original argument was that an ecosystem like Python's could have been a possible substitute for Rust, as an example, and therefore it was a baseless statement. I think the analogy with yours would be an equally good screenwriter or scientific collaborator. You need one, but technically maybe not exactly the one you had.
That said, not trying to undercut anyone's appreciative statements, especially the one from OP in his repo towards Rust! Obviously people (and programming languages, for that matter) aren't simple drop-in replacements for each other in real life. You have to be inspired and empowered by them.
That personal motivation or rapport component is probably what was being missed in the post to which I replied.
Yeah, you're right. Probably I didn't get the correct linguistic context.
But you never know, maybe in another language he would have done a much better project, maybe not. In alternative universes everything would probably be different. If Python didn't exist, this project might not exist either, and it would at least be a little different in any case, so he could have opened the full version to Python codebases too, and the same applies to every other language.
And Rails could have been written in C. The reason why it wasn't should be clear. Different languages enable different ways of thinking, and what is hard to express in one can come out easily in another.
This is valid for both programming and human languages.
Not everyone has infinite time for development. If Rust was the only language in which they could get the performance they wanted, with all the features, in the time available, then the statement is valid.
Note that they talk about the ecosystem, not just the language. If you want a high performance, compiled language, with a good ecosystem of packages that you can leverage then Rust is a great choice. Arguably Go is the only other language that would fit the bill, but for some people the simpler type system doesn't allow the abstractions they want.
I meant he could have said: "Because I love Rust, I'm giving this for free to be used in Rust codebases", instead of "this would never exist if not for Rust".
Or maybe he's serious, maybe the Rust ecosystem literally saved him from death or something like that.
Ah, got it! You were being literal and he was not. That's where the confusion is coming from. Your comment came across as mildly aggressive which is probably why you're getting downvoted, but you were just pointing out that what he said made no sense in literal terms which is true.
I don't think it was intended to be interpreted literally. The intent and the feeling behind what he was communicating was represented clearly.
The saying is perfectly standard, and through common usage its meaning has become equivalent to your alternative. It’s really not a point worth making, or arguing.
Alternatively you could read it as “I wouldn’t have bothered doing this if Rust wasn’t there.”
...as risky as installing a proprietary editor plugin which updates automatically, yes.
Also, AFAIK most understandings of MIT, BSD, and Apache 2.0 licenses require you to acknowledge the copyright holders of the source code you compile into your binary, even if the licenses permit binary distribution. I can't find your "Copyright (c) 2018 Tokio Contributors" or "Copyright (c) 2014 The Rust Project Developers" that I'd expect based on `strings TabNine | grep github`. Maybe you've got a lawyer that suggests otherwise? Your plea of "trust me, I have good hygiene" carries less weight when I have to `strings` your stuff to know what shoulders of which giants you're standing on.
> ...as risky as installing a proprietary editor plugin which updates automatically, yes.
Can't you make the same complaint about any auto-update functionality in any software? Even if it's BSD licensed, you're still counting on whoever has the authority to push an update not to push malicious code.
This doesn't seem to have anything to do with the fact that his code is proprietary nor his monetisation strategy, so why are you singling him out for those?
Proprietary - Can't patch out the autoupdate, which I might be tempted to do if something else in my toolchain did things at someone else's leisure.
DRM/monetisation - the product as of my comment didn't seem to acknowledge the open source works compiled into the binary, and I didn't think that was a good look for someone with the authority to push out malicious code.
As risky as ones that don't update automatically either. Just because a plugin doesn't update automatically doesn't mean it doesn't still have the capability of doing network access. Unless you're actually sandboxing all your IDE plugins and denying most of them network access (and verifying on every new IDE plugin you install whether it's allowed network access), but I don't believe that's how IDE plugins generally work.
MIT only requires source attribution. It's the BSD licenses that require attribution for binary forms of redistribution. Still, it is good manners and good cover-your-arse practice to attribute whatever free software work they used (Google does this with their giant "open source licenses" page).
Well, you could argue that the notice is "present" (in a very esoteric sense) in a binary distribution of the software because it was present in the source code used to build it. You could also argue that a compiled version of a program isn't a "copy or substantial portion" of the Software (compilation is effectively a form of translation, which is a derivative work under the Copyright Act in the US -- and not just a copy).
Personally I would still include it in both, but I always had the impression that MIT was looser than BSD-2-Clause about this. BSD-2-Clause explicitly states that binary distribution needs to include the notice in "the documentation and/or other materials provided with the distribution", and I have a feeling that the license authors might've had a reason to want to be explicit about it.
Does the Vim version autoupdate? I'd rather it wait for me to run my plugin manager- I specifically don't want anything on my machine to update when I'm on-call or traveling.
Wait, is the auto-update all that's needed for network access? I assumed it was license validation or something. If it's just updating, couldn't you provide a different method of updating, like manual update checking, and then people's concerns would be solved?
> Finally, TabNine will work correctly if you deny it network access (say, by blacklisting update.tabnine.com).
Just to clarify - would it still work if I deny network access for the TabNine binary, _after_ validating my license key? Or is the key validation invoked on every launch (hence requiring network access)?
I agree with your concerns - I wonder what could be written to alleviate them? This brings up an interesting problem.
Ie, could we write a monitoring proxy where if enabled, all traffic goes through this proxy. This proxy enables the end user to monitor 100% of traffic, all http requests, and could even have a secondary documentation flow that explains the I/O for security minded individuals.
Then you'd shut off remote network access to the binary, monitor all traffic, and feel secure knowing that it's only sending what it says it's sending, and why.
With that said, I imagine you could do the same thing with a sniffer. Perhaps a documentation standard could be built into requests/responses, so a monitoring program like Wireshark could sniff the I/O and see what it is.
Do you have any thoughts on how someone could both network-license, and make you feel secure in their I/O? Ie, no trust needed?
I don't think a DRM solution that is both robust against an adversary and inspectable by a stakeholder can be engineered. Software can't look out for both the person running it and the person selling it simultaneously when their needs are mutually exclusive. Cory Doctorow has some eloquent content on the topic, ie at [0].
In this particular case, the use of TLS (good!) makes it relatively challenging to inspect. Assuming the author isn't shipping a cert in his binary (doesn't look like it) - I'd have to spin up a new VM, load a custom root cert, and mess with a TLS terminating proxy / forwarding solution, and hope he's not using a secondary stream cipher on top of TLS. Maybe I get lucky and https://mitmproxy.org/ or something just works out of the box. In any case, it's a lot of effort to know he's not siphoning up all the source code on the local machine and using it to train v2 of his project. And the more robust the DRM solution, the less feasible it is to inspect.
If the amount of traffic is predictably small though, you can be confident that it’s not uploading the entirety of your source code, so perhaps some mechanism to establish that would help?
A combo of two applications: a main app and a network agent. The main app writes each request (registration check or update) to a file in JSON or another text format for user inspection. It launches the agent, which reads the same file, performs the operations, sends them to the third party, and writes the result into another file. The main app reads that file the second it appears. To keep it simple and avoid having to delete anything, the files might be numbered, with old exchanges kept unless the admin/owner deletes them.
With such a setup, users can see exactly what data is outgoing, have a reasonable belief that what's incoming is harmless, the main app gets no network access, the agent has no access to secrets/system, and the agent can be open source (entirely or mostly).
So, there's a quick brainstorm from how I did privilege-minimization for high-assurance security. This is basically a proxy architecture. That's a generic pattern you can always consider since it can help protect lots of risky apps both ways.
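A minimal sketch of that file-based exchange, assuming a hypothetical shared exchanges/ directory and JSON payloads (the directory name, endpoint, and helper names are illustrative, not from any real product):

    import json, os, time, urllib.request

    EXCHANGE_DIR = "exchanges"  # assumption: a shared, user-inspectable directory

    def write_request(seq, payload):
        # Main app: write the outgoing request as plain JSON so the user can read it.
        path = os.path.join(EXCHANGE_DIR, "%06d.request.json" % seq)
        with open(path, "w") as f:
            json.dump(payload, f, indent=2)

    def run_agent(seq, endpoint):
        # Agent: read the request file, forward it over the network, write the reply.
        with open(os.path.join(EXCHANGE_DIR, "%06d.request.json" % seq), "rb") as f:
            data = f.read()
        req = urllib.request.Request(endpoint, data=data,
                                     headers={"Content-Type": "application/json"})
        body = urllib.request.urlopen(req).read()
        with open(os.path.join(EXCHANGE_DIR, "%06d.response.json" % seq), "wb") as f:
            f.write(body)

    def read_response(seq, timeout=30):
        # Main app: poll until the numbered response file appears, then load it.
        path = os.path.join(EXCHANGE_DIR, "%06d.response.json" % seq)
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(path):
                with open(path) as f:
                    return json.load(f)
            time.sleep(0.2)
        raise TimeoutError("agent did not respond")

Old exchange files simply stay on disk for auditing, matching the "numbered files kept until the owner deletes them" idea above.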
I wish someone would figure out the right UX for partial autocompletion. E.g. I type "wo" and my phone suggests ("would", "work", "wonder"); there should be an easy way to say I'm trying to type "working" rather than clicking the "work" autocomplete, then backspace, then "ing".
I'd imagine TabNine has this problem in spades, since it does such long autocompletes. It could suggest "unsigned long long" when I've typed "unsi" and I really want "unsigned long int". Seems like a tough UX problem. ¯\_(ツ)_/¯
Xcode has handled this for years. In Xcode, when autocompletion is presented, hitting Tab will complete the longest unique prefixed subword for the currently-selected tab item. If this results in only having one completion option left, then it completes the whole thing (e.g. adding method arguments and whatnot). Similarly, hitting Return will just complete the whole entry instead of the longest unique prefixed subword.
By that I mean if you have 2 autocompletion options `addDefaultFoo()` and `addDefaultBar()`, and you type `add` to get those options, hitting Tab will fill in `addDefault`, and then hitting Tab again will fill in the rest of the selection.
The longest-unique-prefixed-subword is the completion that bash (and tcsh and many other shells) have had for ~30 years now. The non-uniques are listed on the 2nd tab.
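For what it's worth, the core of that behaviour fits in a few lines. A toy sketch in Python (the helper name is mine, not from Xcode or bash):

    import os

    def complete(typed, candidates):
        # Complete only as far as all matching candidates agree; fill in the
        # whole word once exactly one candidate remains.
        matches = [c for c in candidates if c.startswith(typed)]
        if not matches:
            return typed, []
        if len(matches) == 1:
            return matches[0], matches
        return os.path.commonprefix(matches), matches

    print(complete("add", ["addDefaultFoo()", "addDefaultBar()"]))
    # -> ('addDefault', ['addDefaultFoo()', 'addDefaultBar()'])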
Sounds like what you want is fuzzy searching (say fzf [0]) over autocomplete suggestion results. You could type the prefix, and then fuzzy search by typing the suffix to get your desired word (while letting autocomplete fill in the middle of the word).
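A crude sketch of that idea, with a naive subsequence match standing in for fzf's scoring (the function names are made up): typing the prefix "wo" and then the suffix "ng" gives the query "wong", which matches "working" but not "would" or "wonder".

    def fuzzy(query, candidates):
        # Keep candidates that contain the query's characters as a subsequence.
        def is_subseq(q, s):
            it = iter(s)
            return all(ch in it for ch in q)
        return [c for c in candidates if is_subseq(query, c)]

    print(fuzzy("wong", ["working", "would", "wonder"]))  # -> ['working']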
Fwiw, if there are competing `unsigned long int` autocompletes, it looks like it will shorten the recommendation to `unsigned long `, which is really neat.
This is just based on the site, I've not tried it yet. YMMV
UX-wise holding Tab would be the best, meaning Tab => use the completion (like it works now), holding Tab => use this completion but show me further possible completions of that word; if it doesn't have any, just keep the caret there (for me to finish writing it manually).
That can lead to ("Work", "Worry", "Word") so you'd then have to type the 'k'. Now you could have ("Work", "Worker", "Worked") and still are missing the variant you want.
It'd be nice to long press "Work" at step one, get that completed without a space being inserted, then tap 'i' to get ("Working", "Workings", "Workingmen")
Whilst they're doing that how about adding caret-placement sensitivity:
When I click just after the initial letter, e.g. "w|orking" (the pipe representing the caret), the chances of me wanting to type "worked" are pretty slim; instead it should offer "Dorking" (a UK placename), "borking" and such, according to my frecency scores.
Similarly if I click to place the caret at "work|s" I'm probably after "words" or "worts" (beer stuff), or similar. Again "working|" and I'm probably going to change to a different suffix - works, workers, worked.
I'm amazed that gboard (Google's Android keyboard) doesn't already do that? Perhaps I missed a setting.
It turns out that iOS prediction will make a provisional guess based on what you typed and will go back and adjust its autocorrections as you type subsequent words. You can see this more clearly if you use dictation, but allegedly it won't do as well if you use the word corrections every time.
The way I use autocomplete is that I type the entire word I mean really quickly. I get most (or all) of it wrong, but the autocompleter has enough information to substitute that with the correct word. It's much faster than the read-evaluate-correct loop you're describing.
I've been using TabNine for a few weeks, and it's really cool how well it works. My first "woah" moment with it was writing a function where the first thing I wanted to do was take the length of the array, and once I started typing
def foo(bar):
n
it suggested the entire completion of "= len(bar)". It has a really cool way of picking up your coding style that makes it stand out to me.
Thinking about it more, I wonder how useful that type of autocompletion is for those who can type fast. I wonder how much time it takes my brain to context switch away from "code authoring and typing mode" to recognize the " = len(bar)" in the autocomplete options list. It seems like it would be faster to just type out the " = len(bar)" for those who type a solid 60+ words a minute?
I'm trying it out now. If it works well $30 is nothing for this magic. Especially in VSCode, my favorite editor. I have a problem with many languages not having the support I need. And I also don't have the best memory, so autocompletion makes me much faster and costs me less frustration with Googling.
Played with the free version for a bit; 200KB is quite low, so I didn't get any completions. Purchased the premium licence. Gotta say the Stripe integration is very smooth.
Overall, after a couple of hours of playing with this, my mind is quite blown. This is absolutely amazing.
Hopefully Microsoft or someone acquires this technology for a fat sum and open sources it.
I've thought about code completion smarts for a long time. You actually executed and delivered a product. Kudos! Take my money!
I suppose because it's worth trying and the price is not unreasonably high.
But the 15MB index limit on premium seems strange to me, as others have mentioned.
I was using it on a large project, so with 15MB I got no completions on the files I cared about.
$30 is a pretty cheap price for a pattern based completion engine.
It’s the first time I’ve seen it work well. It was completing fairly long statements and I was pleasantly surprised how close the first few results were to what I wanted.
The whole configurationless, all language completion using pattern analysis and fast index lookups in a very easy to install delivery is great execution.
These are the kinds of little things that make me think “why didn’t I do this?”
I wish the author gave a 30-day free premium trial. A lot of people would be willing to spend on the license IMHO.
I'm trying this with VSCode and C#. It's quite neat, though no doubt it'd be even better with a dynamic language.
My main issue is when I type a '.', the C# extension gives me an accurate list of members, but TabNine intersperses its own guesses, which are often wrong.
Possible fixes or mitigations (VSCode API permitting), with a rough sketch of the first one after the list:
- After a '.', discard the TabNine completions whose prefix doesn't match one of the C# completions.
- After a '.', discard all one-word TabNine completions.
- Give all TabNine completions a different icon and maybe sort them all at the top or bottom.
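Here's roughly what I mean by the first one, sketched in plain Python rather than the actual VSCode extension API (the function and variable names are made up):

    def filter_after_dot(tabnine_items, csharp_members):
        # Keep a TabNine suggestion only if it is a prefix of, or extends,
        # one of the members the C# language server reported.
        return [item for item in tabnine_items
                if any(m.startswith(item) or item.startswith(m)
                       for m in csharp_members)]

    print(filter_after_dot(["Coun", "Count()", "Lengthy"], ["Count", "Contains"]))
    # -> ['Coun', 'Count()']  (the pure guess "Lengthy" is dropped)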
> TabNine is 11,000 lines of Rust.
> In recognition of the fact that TabNine could not exist without the Rust ecosystem, TabNine's paid features are always enabled when completing Rust code.
First impression is that this is insanely fast and is actually giving recommendations based on context, without setting up additional files. So, it's doing exactly as advertised.
I'm using this in Vim and would like to know if there's a way to configure it such that the dropdown does not show up until I hit <C-n> or <C-p>? I realize that this is supposed to be a zero-config tool, and I'm asking for a configuration!
Great job with pricing as well. Going to use this for a week before I commit to the license but $29 is a no-brainer for how much use I'll get out of this autocomplete.
Thank you for this. The deoplete defaults are much more sane than ycm. I've started using your plugin. I'm glad to see the author of tabnine has asked to feature it on his site.
I've prototyped something like this in the past using n-grams and it was surprisingly effective. When I think it gets really interesting is if you marry ML/NLP tactics with traditional static code analysis.
So you can imagine the ML engine generating the suggestions with the static analyzer ranking the suggestions intelligently.
It's kind of similar to the original AlphaGo where you have the model generate the potential moves that are then ranked by the Monte Carlo Tree Search algorithm.
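As a toy illustration of that two-stage idea (a statistical model proposes, a separate checker re-ranks), with made-up names and a trivial bigram model standing in for the real thing:

    from collections import Counter, defaultdict

    def train_bigrams(tokens):
        # Count which token tends to follow which.
        model = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
        return model

    def suggest(model, prev_token, in_scope, k=3):
        # Stage 1: the n-gram model proposes candidates.
        candidates = model[prev_token].most_common(10)
        # Stage 2: a "static analysis" stand-in prefers identifiers known to be in scope.
        ranked = sorted(candidates, key=lambda c: (c[0] in in_scope, c[1]), reverse=True)
        return [tok for tok, _ in ranked[:k]]

    code = "x = len ( xs ) ; y = len ( ys ) ; z = len (".split()
    model = train_bigrams(code)
    print(suggest(model, "=", in_scope={"len", "xs", "ys"}))  # -> ['len']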
This is extremely cool. Emacs has a similar, though less intelligent, language agnostic auto-complete function called hippie-expand[0], which has generally been good enough for my needs.
TabNine will still work on projects of 15MB or more; it will index the 15MB of files that are most relevant to the files you are editing (determined by distance in the directory tree).
The limit exists because otherwise latency or RAM usage might be too high.
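One plausible way to read "distance in the directory tree", as a rough sketch (this is an assumption about the mechanism, not a description of TabNine's actual implementation): walk up from both directories to their common ancestor, count the steps, then greedily take the nearest files until the size budget is spent.

    import os

    def tree_distance(a, b):
        # Count directory steps up to the common ancestor and back down.
        a_dirs = os.path.dirname(a).split(os.sep)
        b_dirs = os.path.dirname(b).split(os.sep)
        common = 0
        for x, y in zip(a_dirs, b_dirs):
            if x != y:
                break
            common += 1
        return (len(a_dirs) - common) + (len(b_dirs) - common)

    def pick_files(open_file, all_files, budget=15 * 1024 * 1024):
        # Greedily take the nearest files until the 15MB budget is spent.
        chosen, used = [], 0
        for path in sorted(all_files, key=lambda p: tree_distance(open_file, p)):
            size = os.path.getsize(path)
            if used + size <= budget:
                chosen.append(path)
                used += size
        return chosen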
A configuration option would indeed be nice. I have more than enough memory for tools that provide high value for my daily work. Kudos for setting sane defaults!
I was wondering the same thing. I have a huge project that I would love to TabNine. Since it's still in beta, it's possible this is a product limitation rather than a business limitation.
Nice. I had the same idea a while ago [1], but I didn't make it very far. Good to see that the concept of applying ML to intellisense can actually be useful.
Can't wait to see this for Intellij, looks really cool but I am fully bought in on IDEA and while I have used Atom/VS/Sublime/etc this isn't enough to give up everything else I get in IDEA.
Looks neat, will give it a go. I think you may be in violation of the GPL for your vim plugin since you are creating a combined work but are not releasing the TabNine source code under GPLv3.
It looks like it is probably OK. The vim plugin it is based on seems to have already been designed to run using a client/server architecture. The plugin is the client, and it gets its completions from a server.
He just changed it so that it uses TabNine as that server.
Not cool in my book regardless of legality. Rebranding it to tabnine-vim alone is confusing, since none of the legwork for vim support belongs to TabNine. At the very least the original copyright notice should be left intact in the README (iiuc this is required by GPLv3).
It includes a copy of GPL. The README tells you what it applies to, and what it does not. It tells you where to find the original project it was forked from. And all the files that the TabNine guy did not write contain their original copyright notices from the YouCompleteMe authors.
Let me be not the first to say ... nice! You've ticked a lot of boxes for me, Rust to boot. And the price is reasonable. I echo some of the privacy concerns, but I am not a purist who will not use proprietary development tools -- many of which are from small shops. I had questions that I'm sure I'll get answered after I install the trial extension:
I noticed in some of the examples that the autocompletions were multi-word (for the language involved). This makes sense and I have no real problem with that in limited cases. What I wonder is: have you found any issues with autocompletions resulting in less DRY code?
- and -
Since it's not parsing, is it possible to tell it to not show autocompletions based on a pattern? This is no deal breaker, it just annoys me when code comments accidentally invoke the drop-down and I'd imagine that a similar problem could happen with strings.
Awesome plugin, Jacob. I have a question about the full version: is it per editor? E.g., I use Sublime Text most of the time, but occasionally vim; do I need to buy 2 licenses?
Also, are these licenses transferable between machines (work vs home)?
I've been using TabNine for a couple months now and it's been really great. It "just works" and I don't ever have to worry about it even when opening large projects. It's always fast and high-quality. It really feels like it's just part of Sublime Text in a way that's very rare for a plugin.
> TabNine builds an index of your project, reading your .gitignore so that only source files are included.
Heads up, it's not necessarily uncommon for JS developers to include node_modules in their git repos. If you're developing something like an Electron project or a website instead of a library, it's even sometimes advised to do so -- there's a line of thought that your static dependencies should be tracked as part of your source control.
It might not be a terrible idea to have an alternate config for this that allows excluding other directories. Even if a developer doesn't include their dependencies, they might have old code that they don't want integrated into their suggestions if they're in the middle of a refactor or something.
It's become less popular with the introduction first of ``shrinkwrap`` and then ``package-lock.json``. At one point in time, it was recommended behavior in the official NPM documentation for site deployments, because there wasn't a way to checksum dependencies.
They've since switched to recommending private repositories like Artifactory instead, which to be fair is usually better for very large organizations nowadays. But that wasn't always the case, and even as recent as 2013, it was the flat-out prevailing advice from package managers like Bower, and there are organizations who are still using and maintaining codebases that were set up in 2013.
You won't see a lot of projects on Github that rely on it, because:
A) Usually Open Source projects are designed to be built on multiple environments/OSes.
B) The majority of Open Source Javascript projects are designed to be installed via NPM anyway, and of course you wouldn't include dependencies for something like that.
However, you want to be careful not to make the mistake of assuming that every project has the same concerns as a standard Open Source project. Especially if an Org is going all in on standardizing dev environments through Vagrant or Docker, the question becomes, "why would you want an extra checkout/build step on top of that?"
To handle a static state of dependencies, usually a package-lock or a yarn.lock file is committed to the repo. That is the usual way to freeze the dependency tree.
Freezing a dependency tree isn't the point. The point is to avoid making a network request and to know that your dependencies will still be there 5 years from now.
Remember that one of the benefits of Git is that it's distributed. Even if you are hosting your own npm mirror, relying on it gets rid of that distributed advantage. It doesn't help you to be able to clone from the person next to you if you can't build without making a network request.
I'm not saying that this should be the norm for everyone. It obviously shouldn't be the norm for libraries. But it's not inherently a crazy or harmful idea.
Depends on if you want to bother setting up Artifactory. The problem with having your dependencies outside of your project directory is you're now relying on a network request and a build step to get your stuff up and running.
It's obviously not right for every project; I wouldn't classify it as default behavior or even standard behavior. But if you're already using Vagrant/Docker to standardize environments across your entire stack, there's an argument to be made that there's really no reason not to have your dependencies precompiled and local to the project.
If you can get rid of complexity, it's worth considering whether or not doing so might be a good idea. Across standardized environments, fetching dependencies is extra complexity.
Afaik, most language communities with a package manager are fine with the network request, since it should really only occur on the initial pull and on library updates; not sure what they do with Vagrant, but I imagine just keeping the libs locally and copying them in on Vagrant build.
Eg in pythonland, I’m pretty sure I’ve never seen a repo with packages stored in the repo.
So what happened in jsland that makes the difference?
Python installs its packages system-wide with pip, so you'd never be able to commit those. The default for Ruby gems is also system-wide (although it seems like members of the community are starting to shift away from that).
Node installs packages locally to the project itself. This was partially a direct response to languages like Ruby and Python; the early community felt like system-wide dependencies were usually bad practice. So you can install packages globally in Node, but it's not the default.
When you move away from global dependencies to storing everything in a local folder, suddenly you have the ability to commit things. And at the time, there weren't a ton of resources for hashing a dependency; managers like Yarn didn't exist. So checking into source turns out to be an incredibly straightforward answer to the question of, "how do I guarantee that I will always get the same bytes out?"
People are free to fight me on it, but I would claim that this was not particularly controversial when Node came out, and it is a recent trend that now package managers are advising Orgs to just use lockfiles by default. Although to be fair, a lot of the community ignored that advice back then too, so it's never been exactly common practice in Open Source JS code.
>Python installs its packages system-wide with pip
Standard practice atm is to install packages locally to a project by using venv, or rather pipenv. Afaik, lockfiles remain sufficient. I assume Ruby is in a similar state, but I'm not familiar with its ecosystem.
>And at the time, there weren't a ton of resources for hashing a dependency
I suppose that’d be a big reason, but isn’t that basically equivalent to version pinning? (What’s the point of versioning, if multiple different sources can be mapped to the same project-version in the npm repo?)
It seems odd to me because it seems like it’d screw with all the tooling around vcs (eg github statistics), conflates your own versioning with other projects, and is the behavior you’d expect when package management doesn’t exist like in a C/++ codebase.
rust/python/ruby/haskell don’t see this behavior commonly, specifically because utilizing the package manager is generally sufficient. That JS would commonly only use npm for the initial fetch seems like a huge indictment of npm; it’s apparently failing at half its job? It seems really weird to me that the js community would accept a package manager..that isn’t managing packages.. to the point that adding packages to your vcs becomes the norm, instead of getting fed up with npm.
Adding to it is that, afaik, package management is mostly a solved problem for the common case, and there are enough examples to copy from that I’d expect npm to be in a decent state... but apparently it’s not trusted at all?
> Standard practice atm is to install packages locally to a project by using venv, or rather pipenv.
Thanks for letting me know. This is a good thing to know, it makes me more likely to jump back into Python in the future.
I suppose it is to a certain point an indictment of NPM, certainly I expected more people to start doing this after the left-pad fiasco. But it's also an indictment of package-managers in general.
So let's assume you're using modern NPM or an equivalent. You have a good package manager with both version pinning and (importantly) integrity checks, so you're not worried about it getting compromised. You maintain a private mirror that you host yourself, so you're not worried that it'll go down 5-10 years from now or that the URLs will change. You know that your installation environment will have access to that URL, and you've done enough standardization to know that recompiling your dependencies won't produce code that differs from production. You also only ever install packages from your own mirror, so you don't need to worry about a package that's installed directly from a Github repo vanishing either.
Even in that scenario, you are still going to have to make a network request when your dependencies change. No package manager will remove that requirement. If you're regularly offline, or if your dependencies change often, that's not a solved problem at all. A private mirror doesn't help with that, because your private mirror will still usually need to be accessed over a network (and in any case, how many people here actually have a private package mirror set up on their home network right now?) A cache sort of helps, except on new installs you still have the question of "how do I get the cache? Is it on a flash drive somewhere? How much of the cache do I need?"
If you're maintaining multiple versions of the same software, package install times add up. I've worked in environments where I might jump back and forth between a "new" branch and an "old" branch 10 or 15 times a day. And to avoid common bugs in that environment, you have to get into the habit of re-fetching dependencies on every checkout. When Yarn came out, faster install times were one of its biggest selling points.
I don't think it's a black-and-white thing. All of the downsides you're talking about exist. It does bloat repo size, it does mess with Github stats (if you care about those). It makes tools like this a bit harder to use. Version conflation doesn't seem like a real problem to me, but it could be I suppose. If you're working across multiple environments or installing things into a system path it's probably not a good idea.
But there are advantages to knowing:
A) 100% that when someone checks out a branch, they won't be running outdated dependencies, even if they forget to run a reinstall.
B) If you checkout a branch while you're on a plane without Internet, it'll still work, even if you've never checked it out before or have cleared your package cache.
C) Your dependency will still be there 5 years from now, and you won't need to boot up a server or buy a domain name to make sure it stays available.
So it's benefits and tradeoffs, as is the case with most things.
I understand that the tradeoffs exist; my surprise is mainly that what would be an uncommon, workload-specific workaround in pythonland (e.g. most projects don't have differing library versions across branches, at least not for very long) is common practice in jsland.
Although one factor I just realized is that pip also ships pre-compiled binaries (wheels) instead of the actual source, when available. Those would generally be pretty dumb to want in your repo, since they're developer-platform specific; assuming js only has text files, it would be a more viable strategy in that ecosystem to have as a common case.
Regarding B and C, it’s not like you’re wiping out your libraries every commit; the common case is install once on git clone, and again only on the uncommon library update. A and C are a bit of an obtuse concern for most projects; I can see them happening and being useful, but e.g. none of my public project repos in python have the issue of A or B (they’re not big enough to have version dependency upgrades last more than a day, on a single person, finished in a single go), and for C, it’s much more likely my machine(s) will die long before all the PyPI mirrors do.
Which I’m pretty sure is true of like 99% of packages on PyPI, and on npm; which makes the divergent common practice weird to me. It makes sense in a larger team environment, but if npm tutorials are also recommending it (or node_modules/ isn’t in standard .gitignores), it’s really weird.
And now that you’ve pointed it out, I’m pretty sure I’ve seen this behavior in most js projects I’ve peeked at (where there’ll be a commit with 20k lines randomly in the history), which makes me think this is recommended practice
At one point, it did not, and the default behavior when one did npm install was to use quite permissive package.json rules that allowed minor and patch updates. I remember being bitten by this a few times years ago, particularly when semver was more poorly understood.
Lot of surprise about something that I thought was not particularly controversial to say. Google has been using vendored dependencies in version control for years[0]. It's also going to be the default behavior in Jai[1].
Is there something I'm missing that makes those examples particularly abnormal? Has consensus radically shifted since the last time I looked into this?
Sounds like a great tool, I got two feature requests:
* make the .gitignore logic optional; we always have the system we're working with ignored and only include our extension, but we really need autocompletion from the whole project
How does TabNine work for all languages? Curious about implementation details and use with dynamic languages. I would HAPPILY pay $29 if it works well for Ruby.
I've been using it for Python mainly but I find it's really helpful. It can often infer arguments for functions or functions to use based on the variable names. I've used Ruby lots in the past and I think it would work just as well based on my experience. I would give the free version a try and see what you think.
> When using the Sublime Text client for TabNine, the keyboard shortcut to select the ninth suggestion is Tab+9.
Before reading that I thought it was loosely named after the old T9 (Text on 9 keys) predictive system for mobile phones with numeric keypads only.
That being said, while I'm still passively learning to code and may not need a full license yet, it's a well-priced gift idea for friends who are full-time developers.
I just installed it in Sublime Text 3. TabNine seems to expand the first autocompletion only if the character to the left of the cursor is not whitespace, i.e. if I'm typing "let v = |" (where | is the cursor) and TabNine shows me a list of autocompletions, I press the Tab key and then \t is inserted instead of the first suggestion.
Any thoughts on how this performs vs deoplete? I've really enjoyed deoplete. It makes my coding quite a bit faster. However I've recently become pretty frustrated with all the gocode forks and go module interaction, so there's definitely room for improvement.
TabNine and something like deoplete are not directly comparable in my opinion. Deoplete is a completion framework (with dictionary-based, language-specific systems) and TabNine is an intelligent language-agnostic completion system. You could theoretically have TabNine support deoplete (it is currently YCM based). As the author mentioned in another reply, dictionary-based completion systems are good for API exploration, while TabNine is for more contextual completion.
Deoplete is not dictionary based. Deoplete sources are Python classes that yield lists of 'matches' (some are language specific and some are not). You can make it give you back just about anything.
(Disclaimer: I wrote deoplete's original version of the "file" source, which completes file paths).
I must say that I never found a "clever" autocomplete that really suited me, I just ended up using a rather dumb "hippie-expand" in Emacs that basically tries to complete the word under the cursor using anything it finds in the current file or, failing that, any other open file. It's very dumb but it works regardless of language (including completing plain text in emails for instance) and it's predictable.
I'm pretty interested in your project, the way it seems to be able to learn from the way you type matches my workflow better than the usual "clever" auto-expander. I also have no issues paying for good tools (and $29 is really negligible as far as I'm concerned when it's for productivity tools, my keyboard alone costs an order of magnitude more).
However, and I know I'm probably in the minority here, I won't even consider using your program if I can't get the source. I'm not even asking for a FLOSS license or anything; even if it just came with a tarball that I can't redistribute I would consider it. But as it stands I would be completely relying on you maintaining the code and porting it to whatever platform I may want to use later. As it stands, for instance, it seems that you don't provide binaries for the BSDs: https://github.com/zxqfl/tabnine-vim/tree/master/binaries/0.... . I'm sure I could get it to work with Linux binary compatibility on FreeBSD but why even bother? What if Apple releases an ARM-based desktop a few years from now and you've stopped maintaining your project? Then I have to replace it with something else if I have to code on a Mac. The price is a non-issue but having to work around the closed source nature of the software is not something I want to bother with.
Again, I know that I'm probably in the minority and that many people on HN have no issues using mostly closed source development stacks but I genuinely wonder if you'd have much to lose if you kept the same business model but provided the source. I mean, if people want to pirate your program I'm sure they'll find a way even if it's just the binary, so I doubt you gain much from that. Then the risk is people stealing your code but is there really that much secret sauce in an autocomplete program? If people really care won't they just reverse-engineer it anyway? Aren't they even more likely to try and reverse-engineer it if it's the only way to get an open source version that they control?
Maybe I'm overthinking this.
Anyway, I hope I don't appear too negative, that's just my opinion. I'm happy to see people working on improving our code editing experience in any way or form, sometimes it feels like we're still in the stone age with our dumb ASCII files and relatively primitive tooling.
I'm also a big fan of emacs' dumb autocompletion, mainly dabbrev-expand. (Which hippie-expand uses.) I sometimes try other autocompletion methods, including those that use a proper cross-reference. But most of the time I just fall back to dabbrev-expand when I'm in the flow of typing. The main reason is predictability. It will reliably paste words and identifiers that are close above, so reliably that I usually don't slow down to check if it picked the right one.
And it works everywhere. It will also complete this long name I just typed in a markdown document into the filename when creating a new file, and into the class name after that. Yes, there are better methods (like templates) for many use cases if you bother to set them up. But it's amazing how far this single stupid tool already takes you.
TabNine seems to take this one step further. It's really exciting that this concept gets more mindshare. I'm not going to use it (license) but next time I think about upgrading my autocompletion I'll have a better idea into which direction to take it. I'm always toying with the idea of implementing my own.
As long as there is demand, he'd most probably maintain the project but it's not a disaster if he decides not to.
If there won't be any demand for this tool in a few years this could mean 2 things: Either people think it's not worth it (in this case, you don't lose anything by not using it) or there are better/cheaper alternatives (and you can use them)
> As long as there is demand, he'd most probably maintain the project but it's not a disaster if he decides not to.
There are a lot of reasons for someone to stop maintaining a project even if there is demand: they get bored, they change jobs and don't have time anymore, they get a new hobby…
I tested it in Sublime Text, and it's a bit odd that I can tab after I first pressed the Tab button and the autocomplete window disappeared, but I think I can work with that :)
It seems to work with a small Ruby app, but not with a big Ruby on Rails application. Is it because it's too large? How can I check for errors or index status?
I suppose it's that. I wonder why I don't see any error messages though. I don't want to buy it if it won't work for my project. It's more than 200KB and 15MB, but the .rb files are way less.
[edit] Ok, it just took a bit before showing the tab completions and the license message. It would be useful to know where the indexing process is at.
[edit2] Just bought a license. Keep up the good work!
I've had better luck with TernJS and deoplete in vim. I really like that this is much more responsive, but it lacks the completion support that I've come to expect with TernJS.
Looks like you can:
License keys may be used on multiple computers and operating systems, provided the license key holder is the primary user. https://tabnine.com/eula
I wonder how I can disable it for certain file types; for example in SCSS I don't want those, because I already have a good auto-suggester that I am used to.
How does licensing work? Does this give me a file I put on my machine? I have my dotfiles checked into git, so I'd rather not commit the license publicly.
You get a license code that you paste into an editor with TabNine installed. The autocompletion engine sees it and completes the registration. Not sure where it gets saved ultimately.
I find it interesting that this quote will become less and less absurd as technology continues to improve.
The confusion stems from the fact that a human can tolerate a certain amount of "wrong" and still give the "right answer". For example you don't need to speak with perfect grammar to be understood. Humans won't choke on syntax errors the same way a browser chokes on malformed html.
Machines are much more rigid and can't understand context and intent. But this is starting to slowly change in the age of machine learning. For example if I make a small typo, I expect an autocompleter to still understand what I was trying to type. It wouldn't be too absurd to believe that in a not too distant future, it would also be able to autocomplete away common/obvious bugs. Maybe it can even autocomplete/rewrite code from near pseudocode if the intent is clear enough.
You can still evaluate it if your project is larger than 200KB. TabNine will choose files to index that are relevant to the files you are editing (determined by distance in the directory tree).
Agreed. This was a deal breaker for me. It would be really great if it somehow implemented a standard protocol (like https://langserver.org/) in order to integrate with existing completion plugins.
YouCompleteMe will be better than TabNine for API exploration.
TabNine will be more reliable (it will work correctly with malformed, ill-typed, or half-written code) and it can find patterns in your code, like you can see in the pictures on the website.
YouCompleteMe will know the specifics of the language you are in, so it will be much better at simple syntax; TabNine will be able to learn how you generally do things and repeated patterns, which usually also gets a good bit of the syntax down.
It's also a very competitive price. $29 is excellent for a piece of software that helps my day to day. It's really a sweet spot between very reasonable, and a bit pricey. I'm so happy about this project, and hope it works well (I'll be trying it tonight after work)
On that same note, I wish we were more willing to pay for our tools as a community. If we were, I think more neat and productive projects like this might exist. Yet, developers seem to be historically cheap, and our love for open source (which I do love) seems to be mixed in with our willingness to spend money on our tooling.
> Its paid features are always enabled when completing Rust code, in acknowledgment of the fact that TabNine could not exist without the Rust ecosystem.
Thanks for this, Jacob!
[0] - https://www.reddit.com/r/rust/comments/9uhc1x/tabnine_an_aut...