I made a script that uses atuin to get previous commands related to your current command line: the latest commands run in the same session, in the same directory, in other sessions, the latest commands for the same executable, etc. It then feeds that into GPT and streams the replies to fzf so you can choose the best autocompletion (or it can fix problems in the line you've already written). On WezTerm and Kitty it can also grab the terminal screen contents (error messages and so on). Because the reply is streamed, the first autocomplete line is ready quite soon after the keyboard shortcut is pressed.
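For the curious, the context gathering is just a few queries against atuin's sqlite file. A minimal sketch, not the actual code from the repo (the db path and the ATUIN_SESSION variable are atuin's defaults; the function and variable names here are made up):

    import os
    import sqlite3
    import sys

    # Default atuin database location (also mentioned downthread).
    DB = os.path.expanduser("~/.local/share/atuin/history.db")

    def recent(where, params, limit=10):
        """Most recent matching commands from atuin's history table."""
        con = sqlite3.connect(DB)
        try:
            rows = con.execute(
                "SELECT command FROM history WHERE " + where +
                " ORDER BY timestamp DESC LIMIT ?",
                (*params, limit),
            ).fetchall()
        finally:
            con.close()
        return [r[0] for r in rows]

    current_line = sys.argv[1] if len(sys.argv) > 1 else ""
    session = os.environ.get("ATUIN_SESSION", "")  # exported by atuin's shell hook

    same_session = recent("session = ?", (session,))
    same_dir = recent("cwd = ?", (os.getcwd(),))
    same_exe = (recent("command LIKE ?", (current_line.split()[0] + " %",))
                if current_line else [])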
Have been putting off pushing it to GitHub, think I'm gonna do that today.
Well, it's published. It wasn't ready to publish and I didn't have time to clean it up, but I'll get back to it in the coming days; there are also some extra features on the way that are currently disabled. Works quite well for me though.
Forgot to add: it needs tiktoken, openai, and fzf. If someone knows how to do the command-line query-and-replace on shells other than fish/nushell, please let me know.
It's usually ~two seconds or so for the first line to come out of OpenAI. You can see the example video in the GitHub repo, though I think that run was a little slow.
If you have a big atuin database, though, creating indexes for session and cwd is a good idea so the request can go out quickly.
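Adding them is a one-off; something like this should work (the index names are my own, the path is atuin's default):

    import os, sqlite3

    con = sqlite3.connect(os.path.expanduser("~/.local/share/atuin/history.db"))
    # One-off: speeds up the same-session and same-directory context queries.
    con.execute("CREATE INDEX IF NOT EXISTS idx_history_session ON history(session)")
    con.execute("CREATE INDEX IF NOT EXISTS idx_history_cwd ON history(cwd)")
    con.commit()
    con.close()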
edit: but it's much faster than searching or asking an LLM for command-line parameters instead
Thanks! I'll do that after I've had more time to finish some features and clean the code up in a few days. But I've released a crude version at https://github.com/TIAcode/LLMShellAutoComplete
Probably paranoia, but I’d be uncomfortable feeding it a bunch of domain and server names, along with any other more interesting params that might sneak in.
Does it incorporate the return code of the commands to get an approximate good/bad rating? I wonder what percentage of the CLI mistakes I make return zero anyway because it's a valid command that I simply misused.
It doesn't. I think it should, but I'm not sure how to add it to the prompt yet - I have a feeling that if I just add the codes before or after the command line, GPT will at least occasionally add hallucinated return codes to the autocompletion. Maybe I'll just add the unsuccessful codes or something, but it needs some testing.
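One shape that might sidestep the hallucination problem is annotating the failures only, something like this (untested sketch; the comment marker is arbitrary):

    def render_history(rows):
        """rows = [(command, exit_code), ...]; annotate failures only."""
        lines = []
        for command, exit_code in rows:
            if exit_code != 0:
                # Successful lines stay bare, so the model never sees an
                # annotation pattern worth imitating in its completions.
                lines.append(f"{command}   # failed with exit code {exit_code}")
            else:
                lines.append(command)
        return "\n".join(lines)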
For me one of the most common issues is when I write a regex but I have some of the escapes the wrong way around etc. So I’m giving a valid regex, but it’s not matching what I tried to match. And then I have to change it around a couple of times before it works.
One reason for this is that different programs have different rules for what should be escaped when you write a regex. For example, I think grep is a bit different from vim in this regard.
Hm... Now that I think about it, regex tools on the command line haven't been a go-to for me recently. I used sed, awk, perl -e, et al. constantly from the late 90s until maybe 2015, but since then I'm more likely to pop open an ipython repl or whatever and avoid those weird inconsistencies altogether. Also, developing on the old scp->LAMP stack setup required more shell-script glue than the more automated contemporary setups I've been using.
I'd probably suck at it. No matter how frequently I use regex in :ex commands, I always screw up the escapes, substitutions, etc.
It feeds the information from the atuin database as a prompt for OpenAI, like: "Latest calls for the same executable:\ncmd1\ncmd2" (I should work on my prompts, that doesn't actually look optimal, oh well). Then at the end it gives the current command line and asks for a few options for how to finish or replace the line, with some extra requests for GPT (like don't write anything except the command line, etc.).
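Roughly, the flow looks like this. A simplified sketch, not the actual code from the repo; the model name and prompt wording are placeholders, and it uses the current openai Python client:

    import subprocess
    from openai import OpenAI

    # Stand-ins for the context gathered from atuin:
    current_line = "git pu"
    same_exe = ["git push origin main", "git pull --rebase"]

    prompt = (
        "Latest calls for the same executable:\n" + "\n".join(same_exe) +
        "\n\nCurrent command line:\n" + current_line +
        "\n\nSuggest a few completed command lines, one per line. "
        "Write nothing except the command lines."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    fzf = subprocess.Popen(["fzf"], stdin=subprocess.PIPE, text=True)
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    buf = ""
    for chunk in stream:
        buf += chunk.choices[0].delta.content or ""
        while "\n" in buf:  # flush whole lines to fzf as they arrive
            line, buf = buf.split("\n", 1)
            fzf.stdin.write(line + "\n")
            fzf.stdin.flush()
    if buf:
        fzf.stdin.write(buf + "\n")
    fzf.stdin.close()
    fzf.wait()  # fzf prints the selected line to stdout

Streaming is why the first candidate shows up in fzf well before the model has finished generating all of them.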
I have been using a sqlite database since 2017 [1]; it has over 100,000 items at this point. The database is almost 0.5GB, but I also use the full-text search capabilities of sqlite. Two years ago I built a Mac application [2] that syncs items via iCloud, and it only works on macOS.
I would highly recommend that anyone who spends a lot of time in a terminal improve their shell history with atuin or similar tools. I can't count how many times it actually helped me find some important information about how I did one thing or another.
Ah, a fellow packrat! I have every command I ever typed into a shell since around 2005, and my history weighs in at 1 CD or 650MB (as of a couple of years ago)
I'm probably being wasteful of space because I store each session in a separate file. I used to do a lot of data analysis at the shell back in the day, and found it useful to audit sequences of commands afterwards for mistakes, or to turn them into scripts.
As a lot of people mentioned, this is an FTS index, so it is definitely way more blown up. Plus I save a lot of additional information with it: pwd, session id, shell used, exit codes, and the whole command, obviously. And to support iCloud, also the iCloud entity id. Now that you point it out, 5k per entry is a lot of data, but I'm fine with that. This information is really important to me.
Maybe I have some sort of disease, but while reading "find words out of order or support features like stemming" the regexes for that immediately flashed before my eyes, so I think "necessary" is a little strong there.
I don't think I said it was. I was addressing the specific use cases mentioned. If there's another use case you think is important in searching command line history, feel free to describe it.
Most stemming use cases are trivially solved with a regex. That's the point he was making. The difference between a beginner and an expert with regexes is quite large.
Maybe! Full-text search is great for text. Command lines have some things in common with text, but they definitely aren't normal text. E.g., punctuation is much more significant. Stemming may not be appropriate. Case matters. Word boundaries are different, and many of the significant lumps aren't really words.
Sometimes it's nice to not manually write a regexp to find all of the variants of every word or deal with arbitrary ordering of substrings. And if you're using SQLite and fts5 is installed, why not just create a virtual full text search table with one command and use that? With a small enough corpus, it's a meaningless distinction to bikeshed about the implementation: the easiest solution to build is the best. 500MB of disk space for a pet project that gives you convenience is a terrifically small amount of storage. I have videos that I recorded on my phone that take up more than double that.
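That one command, concretely (a sketch; assumes a plain history(command) table and an sqlite build with fts5, which Python's bundled sqlite usually has):

    import sqlite3

    con = sqlite3.connect("history.db")
    # The "one command": a virtual FTS table, with the porter stemmer enabled.
    con.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS history_fts "
        "USING fts5(command, tokenize='porter')"
    )
    con.execute("INSERT INTO history_fts SELECT command FROM history")
    con.commit()
    # Out-of-order terms and stemming come for free:
    for (cmd,) in con.execute(
        "SELECT command FROM history_fts WHERE history_fts MATCH 'migrating database'"
    ):
        print(cmd)

The porter tokenizer is what buys you stemming ("migrating" matches "migration"), and MATCH terms are unordered by default.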
Wow so it looks like I'm in the minority in having shell command history disabled? I have a small number of commands (20) that go into the live history of login shells for convenience, but nothing gets saved when I log out.
If there's something I do repeatedly, I make an alias or a function for it.
People's history files are a great place to look for passwords and other secrets, mainly. I suppose that risk could be reduced by having the history file encrypted on disk, but I don't know of any shell that does that (can't honestly say I've really looked though).
I get a ton of value out of having my shell history available (both for search but also to try to reconstruct steps of what I did yesterday when my memory is hazy)
I guess you could set up an entropy scanner and flag history lines that have high entropy, but that might not be enough (low-entropy secrets) and might be bothersome (lots of false positives / things that are technically secrets but that you don't care if they're in your shell history).
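A rough version of that scanner is only a few lines, if anyone wants to play with the idea (the threshold and minimum token length are guesses and need tuning; note that hex-only secrets cap out at 4 bits/char):

    import math
    from collections import Counter

    def entropy_bits_per_char(s):
        """Shannon entropy of the character distribution in s."""
        counts = Counter(s)
        return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

    def flag_suspects(lines, threshold=4.0, min_len=20):
        """Yield lines containing a long, high-entropy token (possible secret)."""
        for line in lines:
            if any(len(tok) >= min_len and entropy_bits_per_char(tok) > threshold
                   for tok in line.split()):
                yield line

    # A random-looking token trips the flag; ordinary commands don't.
    print(list(flag_suspects(["export TOKEN=9f8Qw1ZrT3kXo7Lm2NpVb5sD4",
                              "ls -la /var/log"])))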
The problem I see is that if I ssh into 5 different remote hosts in a day, most of the commands are not executed on my host and thus not part of the local (or shared, distributed) history.
I suppose this could be solved with either:
- Some kind of modified ssh that sends back the commands to my host
- Some kind of smart terminal that can analyze commands to build up the history
Any ideas on how to practically solve this problem?
Same concern. I will probably use atuin locally because it seems so cool, but the beauty of the shell is that it is by default so universal, portable, and small, so I don't like the idea of dependencies for my use of it on remote machines. Mentally I've gotten used to the idea that my local shell env is a very different beast than a "normal" shell.
I've only used McFly and found it to be pretty great. My only complaint is that the default search mode is SQL strings, so you have to use `%` for wildcards. I wish it were a more forgiving, less exact search.
No one ever addresses the most important problem: how to separate commands that one would like to retain (preferably indefinitely) vs garbage commands (like cd, ls, cat, etc.) that should better be wiped in a few days.
With bash's HISTIGNORE, I can consciously prefix my command with a space to prevent it being added to history.
ls I usually don't care about, but there are directories I regularly cd to, so it would be nice to have those in history.
I can think of a neat heuristic, which is that I often cd to an absolute or home directory, so if the path starts with / or ~ I'll possibly want to cd there again in the future. Changing to a relative path on the other hand, I tend to do more rarely and while doing more ephemeral work.
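Sketching that heuristic, purely for illustration:

    def keep_cd(command):
        """Heuristic: keep a cd in history only if it targets a / or ~ path."""
        parts = command.split()
        if not parts or parts[0] != "cd":
            return True  # not a cd; defer to other rules
        return len(parts) > 1 and parts[1].startswith(("/", "~"))

    assert keep_cd("cd /etc/nginx")
    assert keep_cd("cd ~/projects/foo")
    assert not keep_cd("cd build")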
1. I don't always know beforehand if the command I am about to execute is garbage I'd like not to save.
2. I just don't want to be conscious of that every time I write a command. I'd rather edit the history after I've finished some work, but that's just too tedious to do manually. I'd like some pre-configured heuristics applied automatically, like "never save cd/ls to history", with a way to overrule that rule in rare situations.
3. Absolute/partial/symlinked paths - are another separate problem :'(
While that’s certainly useful to cull trivial commands, it can also behoove the user to remove commands done wrong.
Coming back several months later to be gifted with several, very similar commands of which only one is right can be frustrating. The history records the failed tries as well as the successes.
Mind, the errors give a place to start, but if it’s far enough removed from the original event it may well be ambiguous enough to send you to a search engine anyway, especially if you have the memory of a goldfish like I do.
> Coming back several months later to be gifted with several, very similar commands of which only one is right can be frustrating. The history records the failed tries as well as the successes.
The correct command is almost always the last one, so as long as your search results are chronological this shouldn't be an issue?
This is also sub-optimal, as it causes another problem: some commands are part of a bigger sequence (the most important property here is that items inside sequences are ordered; the order of commands matters!), so by blindly deleting duplicates you break sequences.
In SQL there are sessions and transactions. In shell history we don't have such entities, and this sucks. One could configure bash/zsh to save history into separate files, but you can't teach them later to source those files properly (retaining session awareness).
Oh, we don't actually delete anything - just deduplicate for search.
Sequential context is something we're building very soon. The idea: you search for "cd /project/dir", press TAB, and it opens a new pane in the TUI showing the command +/- 10 commands. You can then navigate back in time.
This could indeed be useful for managing that one setup command you always have to run in this project dir but never remember the name of
> Oh, we don't actually delete anything - just deduplicate for search.
Good to hear, but the point stands: you deduplicate only for the view, not in the source, so the source remains contaminated with duplicates (at the very least they cost some disk space and increase seek time).
As for the view: since you deduplicate the commands, you can't look up the context (commands executed before and after), because each time the now-deduplicated command was executed in the past, it had its own context!
As the sqlite schema below indicates, each command has a unique id and a timestamp. Whether you want duplicates removed depends on what you want to know. It might be nice for the UI to expose a time context, which would retain duplicates. (Maybe it does! By coincidence, I just installed this yesterday, and I hardly know anything.)
    CREATE TABLE history (
        id text primary key,
        timestamp integer not null,
        duration integer not null,
        exit integer not null,
        command text not null,
        cwd text not null,
        session text not null,
        hostname text not null,
        deleted_at integer,
        unique(timestamp, cwd, command)
    );
    CREATE INDEX idx_history_timestamp on history(timestamp);
    CREATE INDEX idx_history_command on history(command);
    CREATE INDEX idx_history_command_timestamp on history(command, timestamp);
Disk space is the least of the problems garbage entries in command history cause.
It's not just commands that only matter for the current session (like ls/cd/cat); it's also incorrect commands, or commands simply not worth retaining, being saved alongside the useful ones.
The most precious resource is the user's time. When you fzf part of a command against regularly-saved history to re-execute something important, you'll get a long list that you first have to filter through to find the command you were seeking.
So to counter your question with another: why store garbage?
A typical command I run is never going to be something I look up again, so I would prefer to optimize for writing instead of reading. Dumping every command to a file adds no friction to my regular work, while attempting to categorize garbage commands would add a lot of friction.
Also, when I do want to reference my deep history I often find that seeing the full list of what I was doing is helpful at getting myself back into the frame of mind I was in when I ran the commands originally, which can be more valuable than seeing exactly which commands I ran.
> A typical command I run is never going to be something I look up again, so I would prefer to optimize for writing instead of reading. Dumping every command to a file adds no friction to my regular work
this might be a way to improve the default nix toolset in a way worth globalizing; it seems like it has the same philosophy. Discovering missing tools... it's like discovering new primes :)
What's it like with many shells/panes in multiplexers open? I often find my history from one or another either lost or not available across different ones.
With atuin, it's available immediately in other local sessions (panes, windows, tabs, etc). There's also remote sync, so after some configurable amount of time it's even available on other devices.
Works great! To me the reliable instant history sync between tmux panes is one of the best features of atuin. I tried many things to get this working with vanilla bash and it always seemed flaky.
    # append new lines to .bash_history and re-read them after each command
    export PROMPT_COMMAND="history -a; history -n; $PROMPT_COMMAND"
    # append to .bash_history on exit instead of overwriting it
    shopt -s histappend
Except that this doesn't save commands which haven't finished yet, and it never saves the current pending command when the shell is killed (e.g. the user closes the terminal window or logs out or the SSH connection is broken).
Sometimes the rare, long-running commands are the most valuable.
If I set up history software, it should preserve all history, and as early as possible.
It looks like it's not possible to export the history to a format the shell can import[0], so if I wanted to try this out, my commands would be locked there.
It looks interesting, so if I'm overlooking a way to do this with fish, I'd experiment with it.
You can use sqlite[0] to export the database, or if you want a UI, use datasette[1]. On my Mac, the database is stored at ~/.local/share/atuin/history.db
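And since it's plain sqlite, dumping it into fish's own history file is a short script. A sketch (fish's file format here is from memory, and atuin storing nanosecond timestamps is an assumption, so verify both first):

    import os, sqlite3

    db = os.path.expanduser("~/.local/share/atuin/history.db")
    out = os.path.expanduser("~/.local/share/fish/fish_history")

    con = sqlite3.connect(db)
    with open(out, "a") as f:
        for command, ts in con.execute(
            "SELECT command, timestamp FROM history ORDER BY timestamp"
        ):
            if "\n" in command:
                continue  # multi-line entries need YAML escaping; skipped here
            # fish wants unix seconds; atuin stores nanoseconds (assumption).
            f.write(f"- cmd: {command}\n  when: {ts // 1_000_000_000}\n")
    con.close()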
If you just want to experiment with it for, say, a couple of weeks, does it really matter that the commands from those few weeks disappear if you decide not to continue using it?
Presumably you already have a long history of commands from past years in the original format. And if your command line usage is similar to mine, then most of the commands you will use in the future will be covered by your existing history.
So if a few weeks of experimenting with atuin ends with you deciding not to use atuin, then probably you will be fine going back to the old history files that do not include those weeks of activity.
I'm not the parent commenter, but the friction point for me is the slowness when typing the first few characters in an interactive search (I have a large history). I think the searches are synchronous with each keystroke, right? It would feel a lot faster if each keystroke could cancel an in-progress search instead of waiting for it to finish.
There is a noticeable delay between when I press the up arrow and when the list appears. This was the largest friction point. Then, once the list appears, it needs to be filtered to reach a desired pattern. Narrowing down the selection while the results shift in real time distracts from the goal of finding history. I found these hurdles led to results that were still not as accurate as simply ripgrepping against zsh_history.
I always wondered why Linux devs didn't embrace SQLite wholesale for every single tool. Except maybe the kernel, because I don't know anything about the kernel.
Is there a way to highlight matches? Also, does someone know how to change the date format to a full date instead of "5 days ago"? Finally, an observation: I cannot use my Emacs keybindings to kill the line or a word (backwards) when I am in search.
Excellent! I had been thinking of building something along the same lines when I switched from `zsh` to `fish`, as I had been missing `zsh_stats`. Now I don't have to and can focus on my other side project!
Since we're setting up a database, has anyone seen a project to store stdout/stderr in tables too?
Seems more useful in the GPT age, when it could provide an opportunity for conversations about the whole task you worked on. (As well as finding commands based on their output.)
Why would you want syncing? Different machines have different file layouts and different installed executables. Any common commands are either coincidental or part of some fleet operation better managed through an actual remote management system.
I stopped using Atuin because you need one additional click to go up the history, and the configuration didn't provide an easy way to change that behavior.
Edit: apparently, they made it possible to disable that behavior and the documentation is much better
No, you can't! Thanks to some bizarre escaping that happens when zsh (and bash, I think) dumps commands to the history file, any command with non-latin1 characters will break here and won't be read; moreover, it fails silently! The other possibility is that you'll import wrong characters.
I decided to implement this using bash. I have a working prototype running on one machine. If anyone wants to help me refine it, let me know and I'll post what I've done on GitHub.
For people who have tried this, how good is the latency from when I press ctrl-r [first letter] to when I start seeing results? If it's not instant it's just going to frustrate me.
I notice 0 latency (except when using the opt-in skim feature - we haven't properly optimised that yet). Latency is very important for us too. Granted, if you don't use an SSD, then you might encounter some startup lag
I think it actually works on Windows/PowerShell for now. We can't guarantee it because we can't test it, but we have a Windows user who is always submitting fixes for Windows.
I tried it out and I couldn't really figure out a way to `atuin init` on PowerShell, as it requires one of the supported shells as a parameter.
I've also seen a couple of PRs, such as [1], that considered Windows support a dead end as of 2021, so I assumed it to be a dead end as well. Maybe this user is on WSL? That would seemingly work, as it can run bash/zsh or anything there.
Cool tool. I think improvement in this area from non-shell-specific solutions is always good.
One thing I haven't seen yet (correct me if I'm wrong...) is an easy way to get all this stuff to magically appear on a new machine you've ssh'd into for the first time. I've hacked up my own in the past, but that's got issues with tunneling and multi-hops. Anyone know a solution to this? Maybe a feature request?
We've heard from some users on hard drives or networked filesystems having performance issues. sqlite relies on mmap and random access of pages, which can suffer on higher-latency drives.
- having a distributed option (McFly does not have one)
1. Seeking co-maintainers: I don't have much time to maintain this project these days. If someone would like to jump in and become a co-maintainer, it would be appreciated!
Second, I am in awe of how good your documentation is and how well you communicate about atuin to the world at large.
Does Atuin offer any features to toggle the capture of commands into its DB? Being able to opt-in or opt-out of Atuin history on a per-command basis would be pretty useful, especially because there is also the atuin sync feature.
I usually work with sensitive information inside a tmux session because, in the default bash configuration, most commands run in tmux never make it into bash history (I believe the last pane to exit is the only one that does make it). It seems I would have to manually go in and drop rows from the DB if I set up Atuin.
One of my products, bugout, has a command called "bugout trap". Not trying to push bugout here, but thought Atuin might benefit from some of the lessons we learned:
1. Because bugout trap is opt-in (you have to explicitly prefix your command with "bugout trap --"), it also allows users to specify tags to make classifying commands easy. This is really useful for search - e.g. you can use queries like "!#exit:0 #db #migration #prod" to find all unsuccessful database migrations you attempted in your production environment.
2. bugout trap has a --env flag which gives users the option of pushing their environment variables into their history. This is really useful for programs that use a lot of environment variables. The safest way to use this is to first trap commands into your personal knowledge base with --env, then remove or redact any sensitive information, and only then share (in case you want to share with a team).
3. We thought that sharing would be useful for teams to build documentation on top of. Even we ourselves have very little adoption of that use case internally. We use it to keep a record of programs we run in our production environment (especially database migrations).
4. bugout trap also stores data that a program returns from stdout and stderr - this has been INCREDIBLY useful. I do want to add a mode that makes the capture of output optional, though, as currently bugout trap is unusable with things like server start commands which run continuously.
5. In general, I have found that command line history is very personal and private for developers so collaborative features are going to rightly be seen with skepticism.
Hope that helps anyone building similar tools.
    $ bugout trap --help
    Wraps a command, waits for it to complete, and then adds the result to a Bugout journal.

    Specify the wrapped command using "--" followed by the command:

        bugout trap [flags] -- <command>

    Usage:
        bugout trap [flags]

    Flags:
        -e, --env              Set this flag to dump the values of your current environment variables
        -h, --help             help for trap
        -j, --journal string   ID of journal
            --tags strings     Tags to apply to the new entry (as a comma-separated list of strings)
        -T, --title string     Title of new entry
        -t, --token string     Bugout access token to use for the request
What kind of cost? Obviously, backups on the author's server are not something I'd ever do, but the "offline only mode" is there, apparently.
TBH I was thinking about doing this for a while now. The history of my shell, now 3.8MB in size, is one of my competitive advantages as a developer (and a very nice thing to have as a power user). It has accumulated steadily since ~2005, and with fzf, searching the history is much faster than having to google, as long as I've done what I want to do now even once in the distant past. I even wrote a utility to merge history files across multiple hosts[1], so I don't have to think "where did I last use that command", as I have everything on every host. The problem with this, however, is shell startup time. It started being noticeable a few years ago and is slowly getting irritating. The idea of converting the histfile into a sqlite db has crossed my mind more than once because of this.
This is interesting, thanks, I'm checking it out. I was thinking this could be a flash-in-the-pan thing that would then be annoying to maintain, but obviously a global history of everything you've done is definitely a super boon to productivity. How do you handle maintaining this as you transition through jobs, machines, etc? 3.8MB is obviously trivially small, so you could store it on a potato, but what is your workflow around maintaining these one-off ad hoc "developer boost" type tools?
> How do you handle maintaining this as you transition through jobs, machines, etc?
Currently, the tool reads ~/.mergerc, which is a JSON file with a list of SSH hosts to SCP history to and from. As long as the history file is in the same place (it tends to be on hosts that I set up, and otherwise I check the default locations) and the host has an entry in ~/.ssh/config, the tool will work. It's really just a wrapper around a few SCP invocations plus a parser for the (extended) history file format.
Changing servers is just a change in the config file, but it's also helpful when changing jobs, because I can quickly add a bit of filtering before the merging happens. I had to erase some API keys and such a few times; adding a `filter` call here: https://github.com/piotrklibert/zsh-merge-hist/blob/master/s... took care of it.
> what is your workflow around maintaining these one off ad hoc "developer boost" type tools?
Good question. I don't have such a workflow at all. When I commit to writing something like this, I try to make sure it has a scope limited enough that it can be "completed" or "done". In this case, the tool builds on SSH/SCP and a file format that hasn't changed in the last 20 years (at least). So, once I had it working, there wasn't much left to do with it. The only change I had to make recently was changing `+` to `*` in the parser, because somehow (not sure how, actually) an empty command made it into the file. But that's all I had to do in 5 years' time.
I'm not as extreme, but the suckless.org philosophy appears to work well here. Here's another example: https://github.com/piotrklibert/nimlock - it's a port, done because I wanted to do something in Nim, but it worked for me for years and I suspect it still works now (after going fully remote I stopped needing it). There's not much that could break (well, Wayland would break it, but I don't use it), so there's not much you need to do in terms of maintenance.
As for language choices: these are basically random. I made zsh-merge-hist in Scala simply because I was interested in Scala back then. I have little tools written in Nim, OCaml, Racket, Elisp, Raku - and even AWK (a pretty nice language, actually) and shell. That's another reason why making the tools any more complex than absolutely necessary would be a problem: the churn in the ecosystems tends to be too high for me to keep track of, especially since I'd need to track 10 of them.
EDIT: I forgot, but obviously the most important "trick" is not giving a shit if these things work for anyone else but me :D
> I'm checking it out
If you have Java installed, `./gradlew installDist` should give you a `./build/install/bin/zsh-merge-hist` executable to run. The ~/.mergerc (on the host where the tool runs) should look like this:
One such cost could be database size. I currently have 45k history entries in my database and it sits at roughly 15MB in size due to database indices along with the additional data we store
But it's a cost shared by all open shells, right? Well, even if it was 15MB per shell session, it should still be worth it if the startup and searching is faster.
For comparison: I use extended ZSH history format, which records a timestamp and duration of the call (and nothing else), and I have ~65k entries there, with history file size, as mentioned, 3.8MB. It could be an order of magnitude larger and I still wouldn't care, as long as it loads faster than it takes ZSH to parse its history file.
There's currently no paid option. Obviously one should assume such a free service can't be sustainable so you'd be correct to think we should have one.
Currently we rely on Github sponsors as well as our own additional funding
With the server, is it primarily a gateway to the hosted SQLite databases?
e.g. it receives incoming shell history to store in the backend, and maybe does some searches/retrievals of shell history to pass back? e.g. for shell completion, etc.
If that's the case, then I'm wondering if it could work with online data stores (e.g. https://api.dbhub.io <-- my project) that do remote SQLite storage, remote querying, etc.
We currently use postgres. The server is very dumb: it verifies user authentication and allows paging through the encrypted entries.
There's a PoC that allows it to work with SQLite too for single user setups - and we are thinking of switching to a distributed object store for our public server since we don't need any relational behaviour.
Interesting. Yeah, we use PostgreSQL as our main data store too. The SQLite databases are just objects that get stored / opened / updated / queried (etc) as their own individual things. :)
One of our developer members mentioned they're learning Rust. I'll point them at your project and see if they want to have a go at trying to integrate stuff.
At the very least, it might result in a Rust based library for our API getting created. :D
> we are thinking of switching to a distributed object store for our public server
As a data point, we're using Minio (https://github.com/minio/minio) for the object store on our backend. It's been very reliable, though we're not at a point where we're pushing things very hard. :)
Can you tell me if my understanding of this issue is correct?
Let's say I run a command where I've pasted in a credential from my password manager: ` some-cli login username my-secret-password` (note space at beginning)
Normally this would prevent the command from getting saved in any meaningful way in my bash history, so that if I later run a malicious script, it can't collect secrets from my bash history.
With the bug here, it sounds like atuin would prevent that entry from being stored in the sqlite store, but it would still be in my shell history?
If so, this is really significant, and would stop me from using Atuin. Not letting users know about this behaviour is incredibly negligent, and honestly erodes my trust in Atuin to consider user security in general.
it's not serious for most people I guess, but if you rely on bash's HISTIGNORE and don't disable bash's built-in history mechanism when you adopt Atuin, then this is as serious as you are paranoid
Yeah you could totally sync your shell history if you’re using a NFS share or something, but that’s going to affect way more than just your .bash_history
Well, all the client side code is open source and compilable, and all history is fully encrypted before being uploaded.
So even if we were being controlled, you still can be confident that we can't do anything with your data - all we can see is how active you are, that is until someone finds a way to quickly break xsalsa20poly1305.
It's always fair to be critical of these things. However the energy we spend on this is our concern.
At the end of the day, Ellie and I work on this because these features actually improve our workflows. The directory search feature is probably my favourite, and the sync feature is the key feature Ellie wanted to begin with.
On the getting-old part, there's definitely a point where someone has enough baggage that most additional tools solve a problem they've already worked around or solved a different way. For someone new to the field, this tool is on the same footing as the rest and could fit them better.
As an aside, the part I like most about our field is the ability of one or two devs to build exactly the tools they need, and potentially share them with the community with low friction.
You must be getting old because you're unable to see the irony in spending the time and energy to make a comment decrying the quasi-usefulness of how others choose to spend their energy.
I am not a new kid on the block either, but I was looking for such a tool for a very long time. I think distributed shell history across all of my servers is a big win.
Have you considered that you might not be the target audience? Atuin’s ability to sync across computers while being able to separate the context the command was used in has been incredibly useful to my team.
> Atuin replaces your existing shell history with a SQLite database
How can that work? Nobody is going to pay any ransom for just a shell history, and there are ways to get it out of a SQLite database. Wouldn't it be simpler just to encrypt the original .bash_history?
I think the advantage of the sqlite database is that you retain more context for any given command (e.g. what the current working directory was, ...) in a structured way (it is a database after all).
That stored context can then be used to query the database (e.g. filter the history to only show commands that were executed in the cwd).
These queries are the point of using sqlite, not anything security as far as I can tell.
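Concretely, that cwd filter is a single query against the schema posted upthread (a sketch using the default db path, not atuin's actual code):

    import os, sqlite3

    con = sqlite3.connect(os.path.expanduser("~/.local/share/atuin/history.db"))
    rows = con.execute(
        "SELECT command FROM history WHERE cwd = ? ORDER BY timestamp DESC LIMIT 20",
        (os.getcwd(),),
    ).fetchall()
    con.close()
    # Plain .bash_history couldn't answer this: it stores no cwd at all.
    for (cmd,) in rows:
        print(cmd)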