The new --accept-nth option can be used to replace a sed or awk
invocation in the post-processing step.
ps -ef | fzf --multi --header-lines 1 | awk '{print $2}'
ps -ef | fzf --multi --header-lines 1 --accept-nth 2
This may not be a very "Unix-y" thing to do, so I've always felt that fzf
shouldn't have such an option, but I've finally changed my mind because:
* fzf can be configured with a custom delimiter that is a fixed string
or a regular expression.
* In such cases, you'd have to repeat the delimiter in the
post-processing step (see the example below).
* Also, tools like awk or sed may interpret a regular expression
differently, causing mismatches.
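
For example, with colon-delimited input such as /etc/passwd, the
delimiter would otherwise have to be given twice, once to fzf and once
to cut:

fzf --delimiter : < /etc/passwd | cut -d: -f1
fzf --delimiter : --accept-nth 1 < /etc/passwd
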
You can still use sed, cut, or awk if you prefer.
Close #3987
Close #1323
Add String() methods to types so that they can be printed with %s.
Change some %s format specifiers to %v where the default string
representation is good enough. In Go 1.10, `go test` also runs
`go vet`, so this change additionally makes fzf pass `go test`.
Close #1236
Close #1219
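
An illustrative sketch (the type and constant names here are
hypothetical, not fzf's actual ones): implementing fmt.Stringer is what
makes a value printable with %s, while %v is sufficient for types whose
default representation is already readable.

    package main

    import "fmt"

    // eventType stands in for one of fzf's enum-like types.
    type eventType int

    const (
        evtReadNew eventType = iota
        evtReadFin
    )

    // String implements fmt.Stringer, so the value works with %s.
    func (t eventType) String() string {
        switch t {
        case evtReadNew:
            return "EvtReadNew"
        case evtReadFin:
            return "EvtReadFin"
        }
        return fmt.Sprintf("eventType(%d)", int(t))
    }

    func main() {
        fmt.Printf("%s\n", evtReadFin)     // EvtReadFin
        fmt.Printf("%v\n", []int{1, 2, 3}) // [1 2 3]: default form is fine
    }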
When --with-nth is used, fzf used to preprocess each line and store the
result as a rune array, which was wasteful if the line only contained
ASCII characters. In the best case (all ASCII), this change reduces the
memory footprint by 60% and the response time by 15% to 20%. In the
worst case (every line contains non-ASCII characters), an overhead of
3 to 4% is observed.
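
A minimal sketch of the idea (not fzf's exact implementation): keep the
preprocessed line as plain bytes when it is pure ASCII, and only fall
back to a rune slice when a multibyte character is present.

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    // chars stores a line as raw bytes when it is ASCII-only, avoiding
    // the 4-bytes-per-character cost of a rune slice.
    type chars struct {
        bytes []byte // set when the input is ASCII-only
        runes []rune // set otherwise
    }

    func toChars(b []byte) chars {
        for _, c := range b {
            if c >= utf8.RuneSelf { // found a non-ASCII byte
                return chars{runes: []rune(string(b))}
            }
        }
        return chars{bytes: b}
    }

    // Get returns the i-th character in either representation.
    func (c chars) Get(i int) rune {
        if c.runes != nil {
            return c.runes[i]
        }
        return rune(c.bytes[i])
    }

    func main() {
        fmt.Println(string(toChars([]byte("apple pie")).Get(6))) // p
    }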
Make sure to consistently calculate tiebreak scores based on the
original line. This change may not be preferable if you filter aligned
tabular input on a subset of columns using --nth. However, if we
calculated the length tiebreak only on the matched components instead
of the entire line, the result could be very confusing when multiple
--nth components are specified, so let's keep it simple and consistent.

Close #926
This change improves sort ordering for aligned tabular input. Given the
following input:

    apple juice   100
    apple pie     200

fzf --nth=2 will now prefer the line with "pie". Before this change,
fzf compared the padded tokens "juice   " and "pie     ", both of which
have the same length.
I profiled fzf and it turned out that it was spending a significant
amount of time repeatedly converting character arrays into Unicode
codepoints.
This commit greatly improves search performance after the initial scan
by memoizing the converted results.
This commit also addresses fzf's unbounded memory usage. fzf is a
short-lived process that usually handles small input, so it was
implemented to cache intermediate results very aggressively, with no
notion of cache expiration or eviction. I still think a proper caching
scheme would be overkill. Instead, this commit introduces limits on the
maximum size (or minimum selectivity) of the intermediate results that
can be cached.
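
A minimal sketch of both ideas (hypothetical types and limits, not
fzf's actual code): memoize the rune conversion per item, and skip
caching intermediate results that are too large to be worth keeping.

    package main

    // item memoizes the rune conversion of its text after first use.
    type item struct {
        text  []byte
        runes []rune // nil until first requested
    }

    func (i *item) Runes() []rune {
        if i.runes == nil {
            i.runes = []rune(string(i.text)) // convert once, reuse later
        }
        return i.runes
    }

    // resultCache maps a query to the indices of the items it matched.
    // Results larger than the limit (i.e. not selective enough) are
    // not cached, which keeps memory usage bounded.
    type resultCache struct {
        limit int
        cache map[string][]int32
    }

    func (rc *resultCache) Add(query string, result []int32) {
        if len(result) > rc.limit {
            return // not selective enough; don't cache
        }
        rc.cache[query] = result
    }

    func main() {
        rc := resultCache{limit: 2, cache: map[string][]int32{}}
        rc.Add("ap", []int32{0, 1, 2}) // 3 matches > limit: skipped
        rc.Add("apple", []int32{0})    // selective enough: cached
        _, ok := rc.cache["ap"]
        println(ok) // false
    }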