Compare commits

...

36 Commits

Author SHA1 Message Date
Junegunn Choi
7fa5e6c861 0.15.1 2016-09-21 01:28:24 +09:00
Junegunn Choi
00f96aae76 Avoid rendering delay when displaying extremely long lines
Related #666
2016-09-21 01:23:41 +09:00
Junegunn Choi
a749e6bd16 Fix temp directory in a test case 2016-09-21 01:15:35 +09:00
Junegunn Choi
791076d366 Fix panic when pattern occurs after 2^15-th column
Fix #666
2016-09-21 01:15:06 +09:00
Junegunn Choi
37f43fbb35 Add --print0 option
Related: #660
2016-09-19 01:15:38 +09:00
Junegunn Choi
401a5fd5ff Printable character in --expect set should not affect --print-query 2016-09-18 14:34:50 +09:00
Junegunn Choi
1854922f0c Truncate the query string if it's too long
Use a hard-coded limit to keep it simple. An alternative is to dynamically
calculate the width of the visible area and use it as the limit, but that
can cause unwanted truncation of the query on screen resize/split.
2016-09-18 14:34:48 +09:00
Junegunn Choi
2fc7c18747 Revise ranking algorithm 2016-09-18 14:34:46 +09:00
Junegunn Choi
8ef2420677 Update README 2016-09-13 04:12:03 +09:00
Junegunn Choi
cf6f4d74c4 Merge pull request #657 from ishanray/patch-1
Fix typo in comment
2016-09-11 12:13:40 +09:00
ishanray
f44d40f6b4 Update algo.go 2016-09-10 23:40:55 +04:00
Junegunn Choi
1c81a58127 Merge pull request #654 from qiemem/fix-tmux-groups-dont-break-sockets
[fzf-tmux] Make fzf target correct session in group
2016-09-07 21:36:32 +09:00
Bryan Head
9baf7c4874 Make fzf target correct session in group
Fixes #643
Doesn't break #648
2016-09-06 13:03:07 -05:00
Junegunn Choi
22b089e47e Revert "Unset TMUX before splitting window" (#648)
This reverts commit 4d4447779f.
2016-08-31 14:20:29 +09:00
Junegunn Choi
b166f18220 Merge pull request #646 from qiemem/fix-tmux-groups
[fzf-tmux] Fix grouped tmux session confusion
2016-08-29 12:47:43 +09:00
Junegunn Choi
68600f6ecf Merge pull request #645 from ckafi/split-without-IFS
[zsh-completion] Split default zsh binding at the correct place
2016-08-29 12:47:14 +09:00
Bryan Head
4d4447779f Unset TMUX before splitting window
Avoids confusing grouped sessions.
Fixes #643
2016-08-28 16:57:38 -05:00
Tobias Frilling
639de4c27b Split default zsh binding at the correct place
The command substitution and subsequent word splitting used to determine the
default zle widget for ^I formerly worked only if the IFS parameter contained a
space. Now it splits specifically at spaces, regardless of IFS.
2016-08-28 20:34:36 +02:00
Junegunn Choi
d87390934e [neovim] Do not resize if the size of the screen has changed
Related #642
2016-08-28 19:27:18 +09:00
Junegunn Choi
411ec2e557 Merge branch 'joshuarubin-master' 2016-08-28 19:18:13 +09:00
Joshua Rubin
f025602841 [vim] Reset window sizes on close
Fix #520
Fix junegunn/fzf.vim#42
2016-08-28 19:17:24 +09:00
Junegunn Choi
f958c9daf5 [vim] Tilde prefix is not allowed for left or right layout 2016-08-24 01:15:35 +09:00
Junegunn Choi
b86838c2b0 0.13.5 2016-08-21 05:02:45 +09:00
Junegunn Choi
1f7d1f9b15 Update Centos Dockerfile to use Go 1.7 2016-08-21 04:54:53 +09:00
Junegunn Choi
f8fdf9618a No need to cache the result in filtering mode (--filter) 2016-08-20 02:06:57 +09:00
Junegunn Choi
827a83efbc Remove Offset slice from Result struct 2016-08-20 01:53:32 +09:00
Junegunn Choi
3e88849386 [vim] Fix "E706: Variable type mismatch for: arg" 2016-08-19 18:02:32 +09:00
Junegunn Choi
608c416207 Add missing sources 2016-08-19 03:27:42 +09:00
Junegunn Choi
62f6ff9d6c [vim] Make arguments to fzf#wrap() optional
fzf#wrap([name string,] [opts dict,] [fullscreen boolean])
2016-08-19 03:05:22 +09:00
Junegunn Choi
37dc273148 Micro-optimizations
- Make structs smaller
- Introduce Result struct and use it to represent matched items instead of
  reusing Item struct for that purpose
- Avoid unnecessary memory allocation
- Avoid growing slice from the initial capacity
- Code cleanup
2016-08-19 02:39:32 +09:00
Junegunn Choi
f7f01d109e Set the upper limit of the number of search go routines 2016-08-19 01:55:38 +09:00
Junegunn Choi
01ee335521 Remove duplicate code 2016-08-18 03:11:54 +09:00
Junegunn Choi
0e0de29b87 Inline function calls in tight loops
By only using leaf functions
2016-08-18 01:48:52 +09:00
Junegunn Choi
babf877fd6 Increase the number of go routines for search
Sort performance improves as the size of each sublist decreases (n in
nlog(n) decreases). Merger is then responsible for merging the sorted
lists in order, and since in most cases we are only interested in the
matches on the first page of the screen, the overhead of this process
is negligible. (A minimal sketch of this idea follows the commit list.)
2016-08-18 01:46:05 +09:00
Junegunn Choi
935272824e Setting GOMAXPROCS is no longer needed
https://golang.org/doc/go1.5
2016-08-17 02:21:33 +09:00
Junegunn Choi
3a9532c8fd Increase read buffer size to 64KB 2016-08-16 02:06:15 +09:00
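The commit "Increase the number of go routines for search" above reasons that sorting many small sublists concurrently is cheaper than sorting one large list, and that the merger only needs to produce the handful of items shown on the first screen. The following is a minimal Go sketch of that idea; the names (`sortInChunks`, `mergeTop`) are the editor's own and this is not fzf's actual Merger implementation.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// sortInChunks splits items into roughly equal sublists and sorts each one in
// its own goroutine. Each goroutine sorts a smaller n, so the n*log(n) work
// per goroutine shrinks as the number of sublists grows.
func sortInChunks(items []int, chunks int) [][]int {
	size := (len(items) + chunks - 1) / chunks
	var lists [][]int
	var wg sync.WaitGroup
	for start := 0; start < len(items); start += size {
		end := start + size
		if end > len(items) {
			end = len(items)
		}
		sub := append([]int(nil), items[start:end]...) // copy to keep sublists independent
		lists = append(lists, sub)
		wg.Add(1)
		go func(s []int) {
			defer wg.Done()
			sort.Ints(s)
		}(sub)
	}
	wg.Wait()
	return lists
}

// mergeTop repeatedly takes the smallest head among the sorted sublists and
// stops after limit results (roughly "the first page on the screen").
func mergeTop(lists [][]int, limit int) []int {
	heads := make([]int, len(lists))
	var out []int
	for len(out) < limit {
		best, bestList := 0, -1
		for i, l := range lists {
			if heads[i] < len(l) && (bestList < 0 || l[heads[i]] < best) {
				best, bestList = l[heads[i]], i
			}
		}
		if bestList < 0 {
			break // all sublists exhausted
		}
		heads[bestList]++
		out = append(out, best)
	}
	return out
}

func main() {
	items := []int{9, 4, 7, 1, 8, 3, 6, 2, 5, 0}
	fmt.Println(mergeTop(sortInChunks(items, 4), 5)) // [0 1 2 3 4]
}
```

Because `mergeTop` stops as soon as it has produced `limit` items, the merge cost stays small even as the number of sublists grows, which is why the commit message calls the overhead negligible.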
37 changed files with 1644 additions and 1070 deletions

View File

@@ -1,6 +1,25 @@
CHANGELOG CHANGELOG
========= =========
0.15.1
------
- Fixed panic when the pattern occurs after 2^15-th column
- Fixed rendering delay when displaying extremely long lines
0.15.0
------
- Improved fuzzy search algorithm
- Added `--algo=[v1|v2]` option so one can still choose the old algorithm
which values the search performance over the quality of the result
- Advanced scoring criteria
- `--read0` to read input delimited by ASCII NUL character
- `--print0` to print output delimited by ASCII NUL character
0.13.5
------
- Memory and performance optimization
- Up to 2x performance with half the amount of memory
0.13.4 0.13.4
------ ------
- Performance optimization - Performance optimization

View File

@@ -10,18 +10,15 @@ Pros
- No dependencies - No dependencies
- Blazingly fast - Blazingly fast
- e.g. `locate / | fzf`
- Flexible layout
- Runs in fullscreen or in horizontal/vertical split using tmux
- The most comprehensive feature set - The most comprehensive feature set
- Try `fzf --help` and be surprised - Flexible layout using tmux panes
- Batteries included - Batteries included
- Vim/Neovim plugin, key bindings and fuzzy auto-completion - Vim/Neovim plugin, key bindings and fuzzy auto-completion
Installation Installation
------------ ------------
fzf project consists of the following: fzf project consists of the following components:
- `fzf` executable - `fzf` executable
- `fzf-tmux` script for launching fzf in a tmux pane - `fzf-tmux` script for launching fzf in a tmux pane
@@ -30,12 +27,12 @@ fzf project consists of the following:
- Fuzzy auto-completion (bash, zsh) - Fuzzy auto-completion (bash, zsh)
- Vim/Neovim plugin - Vim/Neovim plugin
You can [download fzf executable][bin] alone, but it's recommended that you You can [download fzf executable][bin] alone if you don't need the extra
install the extra stuff using the attached install script. stuff.
[bin]: https://github.com/junegunn/fzf-bin/releases [bin]: https://github.com/junegunn/fzf-bin/releases
#### Using git (recommended) ### Using git
Clone this repository and run Clone this repository and run
[install](https://github.com/junegunn/fzf/blob/master/install) script. [install](https://github.com/junegunn/fzf/blob/master/install) script.
@@ -45,7 +42,7 @@ git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install ~/.fzf/install
``` ```
#### Using Homebrew ### Using Homebrew
On OS X, you can use [Homebrew](http://brew.sh/) to install fzf. On OS X, you can use [Homebrew](http://brew.sh/) to install fzf.
@@ -56,26 +53,30 @@ brew install fzf
/usr/local/opt/fzf/install /usr/local/opt/fzf/install
``` ```
#### Install as Vim plugin ### Vim plugin
Once you have cloned the repository, add the following line to your .vimrc. You can manually add the directory to `&runtimepath` as follows,
```vim ```vim
" If installed using git
set rtp+=~/.fzf set rtp+=~/.fzf
" If installed using Homebrew
set rtp+=/usr/local/opt/fzf
``` ```
Or you can have [vim-plug](https://github.com/junegunn/vim-plug) manage fzf But it's recommended that you use a plugin manager like
(recommended): [vim-plug](https://github.com/junegunn/vim-plug).
```vim ```vim
Plug 'junegunn/fzf', { 'dir': '~/.fzf', 'do': './install --all' } Plug 'junegunn/fzf', { 'dir': '~/.fzf', 'do': './install --all' }
``` ```
#### Upgrading fzf ### Upgrading fzf
fzf is being actively developed and you might want to upgrade it once in a fzf is being actively developed and you might want to upgrade it once in a
while. Please follow the instruction below depending on the installation while. Please follow the instruction below depending on the installation
method. method used.
- git: `cd ~/.fzf && git pull && ./install` - git: `cd ~/.fzf && git pull && ./install`
- brew: `brew update; brew reinstall fzf` - brew: `brew update; brew reinstall fzf`
@@ -344,7 +345,7 @@ page](https://github.com/junegunn/fzf/wiki/Examples-(vim)).
#### `fzf#wrap` #### `fzf#wrap`
`fzf#wrap(name string, [opts dict, [fullscreen boolean]])` is a helper `fzf#wrap([name string,] [opts dict,] [fullscreen boolean])` is a helper
function that decorates the options dictionary so that it understands function that decorates the options dictionary so that it understands
`g:fzf_layout`, `g:fzf_action`, and `g:fzf_history_dir` like `:FZF`. `g:fzf_layout`, `g:fzf_action`, and `g:fzf_history_dir` like `:FZF`.
@@ -390,6 +391,12 @@ fzf
export FZF_CTRL_T_COMMAND="$FZF_DEFAULT_COMMAND" export FZF_CTRL_T_COMMAND="$FZF_DEFAULT_COMMAND"
``` ```
If you don't want to exclude hidden files, use the following command:
```sh
export FZF_DEFAULT_COMMAND='ag --hidden --ignore .git -g ""'
```
#### `git ls-tree` for fast traversal #### `git ls-tree` for fast traversal
If you're running fzf in a large git repository, `git ls-tree` can boost up the If you're running fzf in a large git repository, `git ls-tree` can boost up the

View File

@@ -161,14 +161,14 @@ done
if [[ -n "$term" ]] || [[ -t 0 ]]; then if [[ -n "$term" ]] || [[ -t 0 ]]; then
cat <<< "\"$fzf\" $opts > $fifo2; echo \$? > $fifo3 $close" > $argsf cat <<< "\"$fzf\" $opts > $fifo2; echo \$? > $fifo3 $close" > $argsf
tmux set-window-option synchronize-panes off \;\ TMUX=$(echo $TMUX | cut -d , -f 1,2) tmux set-window-option synchronize-panes off \;\
set-window-option remain-on-exit off \;\ set-window-option remain-on-exit off \;\
split-window $opt "cd $(printf %q "$PWD");$envs bash $argsf" $swap \ split-window $opt "cd $(printf %q "$PWD");$envs bash $argsf" $swap \
> /dev/null 2>&1 > /dev/null 2>&1
else else
mkfifo $fifo1 mkfifo $fifo1
cat <<< "\"$fzf\" $opts < $fifo1 > $fifo2; echo \$? > $fifo3 $close" > $argsf cat <<< "\"$fzf\" $opts < $fifo1 > $fifo2; echo \$? > $fifo3 $close" > $argsf
tmux set-window-option synchronize-panes off \;\ TMUX=$(echo $TMUX | cut -d , -f 1,2) tmux set-window-option synchronize-panes off \;\
set-window-option remain-on-exit off \;\ set-window-option remain-on-exit off \;\
split-window $opt "$envs bash $argsf" $swap \ split-window $opt "$envs bash $argsf" $swap \
> /dev/null 2>&1 > /dev/null 2>&1

View File

@@ -2,8 +2,8 @@
set -u set -u
[[ "$@" =~ --pre ]] && version=0.13.4 pre=1 || [[ "$@" =~ --pre ]] && version=0.15.1 pre=1 ||
version=0.13.4 pre=0 version=0.15.1 pre=0
auto_completion= auto_completion=
key_bindings= key_bindings=

View File

@@ -21,7 +21,7 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. THE SOFTWARE.
.. ..
.TH fzf-tmux 1 "Aug 2016" "fzf 0.13.4" "fzf-tmux - open fzf in tmux split pane" .TH fzf-tmux 1 "Sep 2016" "fzf 0.15.1" "fzf-tmux - open fzf in tmux split pane"
.SH NAME .SH NAME
fzf-tmux - open fzf in tmux split pane fzf-tmux - open fzf in tmux split pane

View File

@@ -21,7 +21,7 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. THE SOFTWARE.
.. ..
.TH fzf 1 "Aug 2016" "fzf 0.13.4" "fzf - a command-line fuzzy finder" .TH fzf 1 "Sep 2016" "fzf 0.15.1" "fzf - a command-line fuzzy finder"
.SH NAME .SH NAME
fzf - a command-line fuzzy finder fzf - a command-line fuzzy finder
@@ -47,6 +47,16 @@ Case-insensitive match (default: smart-case match)
.TP .TP
.B "+i" .B "+i"
Case-sensitive match Case-sensitive match
.TP
.BI "--algo=" TYPE
Fuzzy matching algorithm (default: v2)
.br
.BR v2 " Optimal scoring algorithm (quality)"
.br
.BR v1 " Faster but not guaranteed to find the optimal result (performance)"
.br
.TP .TP
.BI "-n, --nth=" "N[,..]" .BI "-n, --nth=" "N[,..]"
Comma-separated list of field index expressions for limiting search scope. Comma-separated list of field index expressions for limiting search scope.
@@ -275,6 +285,12 @@ with the default enter key.
e.g. \fBfzf --expect=ctrl-v,ctrl-t,alt-s,f1,f2,~,@\fR e.g. \fBfzf --expect=ctrl-v,ctrl-t,alt-s,f1,f2,~,@\fR
.RE .RE
.TP .TP
.B "--read0"
Read input delimited by ASCII NUL character instead of newline character
.TP
.B "--print0"
Print output delimited by ASCII NUL character instead of newline character
.TP
.B "--sync" .B "--sync"
Synchronous search for multi-staged filtering. If specified, fzf will launch Synchronous search for multi-staged filtering. If specified, fzf will launch
ncurses finder only after the input stream is complete. ncurses finder only after the input stream is complete.

View File

@@ -154,13 +154,21 @@ function! s:common_sink(action, lines) abort
endtry endtry
endfunction endfunction
" name string, [opts dict, [fullscreen boolean]] " [name string,] [opts dict,] [fullscreen boolean]
function! fzf#wrap(name, ...) function! fzf#wrap(...)
if type(a:name) != type('') let args = ['', {}, 0]
throw 'invalid name type: string expected' let expects = map(copy(args), 'type(v:val)')
endif let tidx = 0
let opts = copy(get(a:000, 0, {})) for arg in copy(a:000)
let bang = get(a:000, 1, 0) let tidx = index(expects, type(arg), tidx)
if tidx < 0
throw 'invalid arguments (expected: [name string] [opts dict] [fullscreen boolean])'
endif
let args[tidx] = arg
let tidx += 1
unlet arg
endfor
let [name, opts, bang] = args
" Layout: g:fzf_layout (and deprecated g:fzf_height) " Layout: g:fzf_layout (and deprecated g:fzf_height)
if bang if bang
@@ -179,12 +187,12 @@ function! fzf#wrap(name, ...)
" History: g:fzf_history_dir " History: g:fzf_history_dir
let opts.options = get(opts, 'options', '') let opts.options = get(opts, 'options', '')
if len(get(g:, 'fzf_history_dir', '')) if len(name) && len(get(g:, 'fzf_history_dir', ''))
let dir = expand(g:fzf_history_dir) let dir = expand(g:fzf_history_dir)
if !isdirectory(dir) if !isdirectory(dir)
call mkdir(dir, 'p') call mkdir(dir, 'p')
endif endif
let opts.options = join(['--history', s:escape(dir.'/'.a:name), opts.options]) let opts.options = join(['--history', s:escape(dir.'/'.name), opts.options])
endif endif
" Action: g:fzf_action " Action: g:fzf_action
@@ -268,10 +276,10 @@ function! s:fzf_tmux(dict)
if s:present(a:dict, o) if s:present(a:dict, o)
let spec = a:dict[o] let spec = a:dict[o]
if (o == 'up' || o == 'down') && spec[0] == '~' if (o == 'up' || o == 'down') && spec[0] == '~'
let size = '-'.o[0].s:calc_size(&lines, spec[1:], a:dict) let size = '-'.o[0].s:calc_size(&lines, spec, a:dict)
else else
" Legacy boolean option " Legacy boolean option
let size = '-'.o[0].(spec == 1 ? '' : spec) let size = '-'.o[0].(spec == 1 ? '' : substitute(spec, '^\~', '', ''))
endif endif
break break
endif endif
@@ -367,10 +375,11 @@ function! s:execute_tmux(dict, command, temps) abort
endfunction endfunction
function! s:calc_size(max, val, dict) function! s:calc_size(max, val, dict)
if a:val =~ '%$' let val = substitute(a:val, '^\~', '', '')
let size = a:max * str2nr(a:val[:-2]) / 100 if val =~ '%$'
let size = a:max * str2nr(val[:-2]) / 100
else else
let size = min([a:max, str2nr(a:val)]) let size = min([a:max, str2nr(val)])
endif endif
let srcsz = -1 let srcsz = -1
@@ -401,7 +410,7 @@ function! s:split(dict)
if !empty(val) if !empty(val)
let [cmd, resz, max] = triple let [cmd, resz, max] = triple
if (dir == 'up' || dir == 'down') && val[0] == '~' if (dir == 'up' || dir == 'down') && val[0] == '~'
let sz = s:calc_size(max, val[1:], a:dict) let sz = s:calc_size(max, val, a:dict)
else else
let sz = s:calc_size(max, val, {}) let sz = s:calc_size(max, val, {})
endif endif
@@ -422,9 +431,11 @@ function! s:split(dict)
endfunction endfunction
function! s:execute_term(dict, command, temps) abort function! s:execute_term(dict, command, temps) abort
let winrest = winrestcmd()
let [ppos, winopts] = s:split(a:dict) let [ppos, winopts] = s:split(a:dict)
let fzf = { 'buf': bufnr('%'), 'ppos': ppos, 'dict': a:dict, 'temps': a:temps, let fzf = { 'buf': bufnr('%'), 'ppos': ppos, 'dict': a:dict, 'temps': a:temps,
\ 'winopts': winopts, 'command': a:command } \ 'winopts': winopts, 'winrest': winrest, 'lines': &lines,
\ 'columns': &columns, 'command': a:command }
function! fzf.switch_back(inplace) function! fzf.switch_back(inplace)
if a:inplace && bufnr('') == self.buf if a:inplace && bufnr('') == self.buf
" FIXME: Can't re-enter normal mode from terminal mode " FIXME: Can't re-enter normal mode from terminal mode
@@ -456,6 +467,10 @@ function! s:execute_term(dict, command, temps) abort
execute 'bd!' self.buf execute 'bd!' self.buf
endif endif
if &lines == self.lines && &columns == self.columns && s:getpos() == self.ppos
execute self.winrest
endif
if !s:exit_handler(a:code, self.command, 1) if !s:exit_handler(a:code, self.command, 1)
return return
endif endif

View File

@@ -186,7 +186,7 @@ fzf-completion() {
[ -z "$fzf_default_completion" ] && { [ -z "$fzf_default_completion" ] && {
binding=$(bindkey '^I') binding=$(bindkey '^I')
[[ $binding =~ 'undefined-key' ]] || fzf_default_completion=$binding[(w)2] [[ $binding =~ 'undefined-key' ]] || fzf_default_completion=$binding[(s: :w)2]
unset binding unset binding
} }

View File

@@ -11,18 +11,18 @@ RUN cd / && curl \
https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz | \ https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz | \
tar -xz && mv go go1.4 tar -xz && mv go go1.4
# Install Go 1.5 # Install Go 1.7
RUN cd / && curl \ RUN cd / && curl \
https://storage.googleapis.com/golang/go1.5.3.linux-amd64.tar.gz | \ https://storage.googleapis.com/golang/go1.7.linux-amd64.tar.gz | \
tar -xz && mv go go1.5 tar -xz && mv go go1.7
# Install RPMs for building static 32-bit binary # Install RPMs for building static 32-bit binary
RUN curl ftp://ftp.pbone.net/mirror/ftp.centos.org/6.7/os/i386/Packages/ncurses-static-5.7-4.20090207.el6.i686.rpm -o rpm && rpm -i rpm && \ RUN curl ftp://ftp.pbone.net/mirror/ftp.centos.org/6.8/os/i386/Packages/ncurses-static-5.7-4.20090207.el6.i686.rpm -o rpm && rpm -i rpm && \
curl ftp://ftp.pbone.net/mirror/ftp.centos.org/6.7/os/i386/Packages/gpm-static-1.20.6-12.el6.i686.rpm -o rpm && rpm -i rpm curl ftp://ftp.pbone.net/mirror/ftp.centos.org/6.8/os/i386/Packages/gpm-static-1.20.6-12.el6.i686.rpm -o rpm && rpm -i rpm
ENV GOROOT_BOOTSTRAP /go1.4 ENV GOROOT_BOOTSTRAP /go1.4
ENV GOROOT /go1.5 ENV GOROOT /go1.7
ENV PATH /go1.5/bin:$PATH ENV PATH /go1.7/bin:$PATH
# For i386 build # For i386 build
RUN cd $GOROOT/src && GOARCH=386 ./make.bash RUN cd $GOROOT/src && GOARCH=386 ./make.bash

View File

@@ -47,33 +47,6 @@ proportional to the number of CPU cores. On my MacBook Pro (Mid 2012), the new
version was shown to be an order of magnitude faster on certain cases. It also version was shown to be an order of magnitude faster on certain cases. It also
starts much faster though the difference may not be noticeable. starts much faster though the difference may not be noticeable.
Differences with Ruby version
-----------------------------
The Go version is designed to be perfectly compatible with the previous Ruby
version. The only behavioral difference is that the new version ignores the
numeric argument to `--sort=N` option and always sorts the result regardless
of the number of matches. The value was introduced to limit the response time
of the query, but the Go version is blazingly fast (almost instant response
even for 1M+ items) so I decided that it's no longer required.
System requirements
-------------------
Currently, prebuilt binaries are provided only for OS X and Linux. The install
script will fall back to the legacy Ruby version on the other systems, but if
you have Go 1.4 installed, you can try building it yourself.
However, as pointed out in [golang.org/doc/install][req], the Go version may
not run on CentOS/RHEL 5.x, and if that's the case, the install script will
choose the Ruby version instead.
The Go version depends on [ncurses][ncurses] and some Unix system calls, so it
shouldn't run natively on Windows at the moment. But it won't be impossible to
support Windows by falling back to a cross-platform alternative such as
[termbox][termbox] only on Windows. If you're interested in making fzf work on
Windows, please let me know.
Build Build
----- -----
@@ -88,16 +61,22 @@ make install
make linux make linux
``` ```
Contribution Test
------------ ----
For the time being, I will not add or accept any new features until we can be Unit tests can be run with `make test`. Integration tests are written in Ruby
sure that the implementation is stable and we have a sufficient number of test script that should be run on tmux.
cases. However, fixes for obvious bugs and new test cases are welcome.
I also care much about the performance of the implementation, so please make ```sh
sure that your change does not result in performance regression. And please be # Unit tests
noted that we don't have a quantitative measure of the performance yet. make test
# Install the executable to ../bin directory
make install
# Integration tests
ruby ../test/test_go.rb
```
Third-party libraries used Third-party libraries used
-------------------------- --------------------------

View File

@@ -1,19 +1,91 @@
package algo package algo
/*
Algorithm
---------
FuzzyMatchV1 finds the first "fuzzy" occurrence of the pattern within the given
text in O(n) time where n is the length of the text. Once the position of the
last character is located, it traverses backwards to see if there's a shorter
substring that matches the pattern.
a_____b___abc__ To find "abc"
*-----*-----*> 1. Forward scan
<*** 2. Backward scan
The algorithm is simple and fast, but as it only sees the first occurrence,
it is not guaranteed to find the occurrence with the highest score.
a_____b__c__abc
*-----*--* ***
FuzzyMatchV2 implements a modified version of Smith-Waterman algorithm to find
the optimal solution (highest score) according to the scoring criteria. Unlike
the original algorithm, omission or mismatch of a character in the pattern is
not allowed.
Performance
-----------
The new V2 algorithm is slower than V1 as it examines all occurrences of the
pattern instead of stopping immediately after finding the first one. The time
complexity of the algorithm is O(nm) if a match is found and O(n) otherwise
where n is the length of the item and m is the length of the pattern. Thus, the
performance overhead may not be noticeable for a query with high selectivity.
However, if performance is more important than the quality of the result,
you can still choose the v1 algorithm with --algo=v1.
Scoring criteria
----------------
- We prefer matches at special positions, such as the start of a word, or
uppercase character in camelCase words.
- That is, we prefer an occurrence of the pattern with more characters
matching at special positions, even if the total match length is longer.
e.g. "fuzzyfinder" vs. "fuzzy-finder" on "ff"
````````````
- Also, if the first character in the pattern appears at one of the special
positions, the bonus point for the position is multiplied by a constant
as it is extremely likely that the first character in the typed pattern
has more significance than the rest.
e.g. "fo-bar" vs. "foob-r" on "br"
``````
- But since fzf is still a fuzzy finder, not an acronym finder, we should also
consider the total length of the matched substring. This is why we have the
gap penalty. The gap penalty increases as the length of the gap (distance
between the matching characters) increases, so the effect of the bonus is
eventually cancelled at some point.
e.g. "fuzzyfinder" vs. "fuzzy-blurry-finder" on "ff"
```````````
- Consequently, it is crucial to find the right balance between the bonus
and the gap penalty. The parameters were chosen so that the bonus is cancelled
when the gap size increases beyond 8 characters.
- The bonus mechanism can have the undesirable side effect where consecutive
matches are ranked lower than the ones with gaps.
e.g. "foobar" vs. "foo-bar" on "foob"
```````
- To correct this anomaly, we also give an extra bonus point to each character
in a consecutive matching chunk.
e.g. "foobar" vs. "foo-bar" on "foob"
``````
- The amount of consecutive bonus is primarily determined by the bonus of the
first character in the chunk.
e.g. "foobar" vs. "out-of-bound" on "oob"
````````````
*/
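To make the scoring criteria above concrete, here is a small, self-contained Go sketch that applies only the word-boundary bonus, the first-character multiplier, and the gap penalty (using the constant values defined further down in this file) to the "ff" examples from the comment. It illustrates the arithmetic only; it is not the actual FuzzyMatchV2 implementation, and the helper names are the editor's own.

```go
package main

import "fmt"

// Simplified scoring constants mirroring the ones described above.
const (
	scoreMatch               = 16
	scoreGapStart            = -3
	scoreGapExtention        = -1
	bonusBoundary            = scoreMatch / 2
	bonusFirstCharMultiplier = 2
)

// score sums match points, word-boundary bonuses, and gap penalties for the
// given match positions (ascending). Only '-' and the start of the string are
// treated as word boundaries in this sketch.
func score(text string, positions []int) int {
	total := 0
	for i, pos := range positions {
		total += scoreMatch
		if pos == 0 || text[pos-1] == '-' {
			bonus := bonusBoundary
			if i == 0 {
				bonus *= bonusFirstCharMultiplier // boost the first pattern character
			}
			total += bonus
		}
		if i > 0 {
			if gap := pos - positions[i-1] - 1; gap > 0 {
				total += scoreGapStart + (gap-1)*scoreGapExtention
			}
		}
	}
	return total
}

func main() {
	fmt.Println(score("fuzzyfinder", []int{0, 5}))          // 42
	fmt.Println(score("fuzzy-finder", []int{0, 6}))         // 49
	fmt.Println(score("fuzzy-blurry-finder", []int{0, 13})) // 42
}
```

"fuzzy-finder" beats "fuzzyfinder" thanks to the boundary bonus on its second "f", while the 12-character gap in "fuzzy-blurry-finder" cancels that bonus out again, which is exactly the balance between bonus and gap penalty described above.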
import ( import (
"fmt"
"strings" "strings"
"unicode" "unicode"
"github.com/junegunn/fzf/src/util" "github.com/junegunn/fzf/src/util"
) )
/* var DEBUG bool
* String matching algorithms here do not use strings.ToLower to avoid
* performance penalty. And they assume pattern runes are given in lowercase
* letters when caseSensitive is false.
*
* In short: They try to do as little work as possible.
*/
func indexAt(index int, max int, forward bool) int { func indexAt(index int, max int, forward bool) int {
if forward { if forward {
@@ -22,23 +94,50 @@ func indexAt(index int, max int, forward bool) int {
return max - index - 1 return max - index - 1
} }
func runeAt(text util.Chars, index int, max int, forward bool) rune { // Result contains the results of running a match function.
if forward {
return text.Get(index)
}
return text.Get(max - index - 1)
}
// Result conatins the results of running a match function.
type Result struct { type Result struct {
Start int32 // TODO int32 should suffice
End int32 Start int
End int
// Items are basically sorted by the lengths of matched substrings. Score int
// But we slightly adjust the score with bonus for better results.
Bonus int32
} }
const (
scoreMatch = 16
scoreGapStart = -3
scoreGapExtention = -1
// We prefer matches at the beginning of a word, but the bonus should not be
// too great to prevent the longer acronym matches from always winning over
// shorter fuzzy matches. The bonus point here was specifically chosen that
// the bonus is cancelled when the gap between the acronyms grows over
// 8 characters, which is approximately the average length of the words found
// in web2 dictionary and my file system.
bonusBoundary = scoreMatch / 2
// Although bonus point for non-word characters is non-contextual, we need it
// for computing bonus points for consecutive chunks starting with a non-word
// character.
bonusNonWord = scoreMatch / 2
// Edge-triggered bonus for matches in camelCase words.
// Compared to word-boundary case, they don't accompany single-character gaps
// (e.g. FooBar vs. foo-bar), so we deduct bonus point accordingly.
bonusCamel123 = bonusBoundary + scoreGapExtention
// Minimum bonus point given to characters in consecutive chunks.
// Note that bonus points for consecutive matches wouldn't have been needed if
// we had used a fixed match score as in the original algorithm.
bonusConsecutive = -(scoreGapStart + scoreGapExtention)
// The first character in the typed pattern usually has more significance
// than the rest so it's important that it appears at special positions where
// bonus points are given. e.g. "to-go" vs. "ongoing" on "og" or on "ogo".
// The amount of the extra bonus should be limited so that the gap penalty is
// still respected.
bonusFirstCharMultiplier = 2
)
type charClass int type charClass int
const ( const (
@@ -49,85 +148,351 @@ const (
charNumber charNumber
) )
func evaluateBonus(caseSensitive bool, text util.Chars, pattern []rune, sidx int, eidx int) int32 { func posArray(withPos bool, len int) *[]int {
var bonus int32 if withPos {
pidx := 0 pos := make([]int, 0, len)
lenPattern := len(pattern) return &pos
consecutive := false
prevClass := charNonWord
for index := util.Max(0, sidx-1); index < eidx; index++ {
char := text.Get(index)
var class charClass
if unicode.IsLower(char) {
class = charLower
} else if unicode.IsUpper(char) {
class = charUpper
} else if unicode.IsLetter(char) {
class = charLetter
} else if unicode.IsNumber(char) {
class = charNumber
} else {
class = charNonWord
}
var point int32
if prevClass == charNonWord && class != charNonWord {
// Word boundary
point = 2
} else if prevClass == charLower && class == charUpper ||
prevClass != charNumber && class == charNumber {
// camelCase letter123
point = 1
}
prevClass = class
if index >= sidx {
if !caseSensitive {
if char >= 'A' && char <= 'Z' {
char += 32
} else if char > unicode.MaxASCII {
char = unicode.To(unicode.LowerCase, char)
}
}
pchar := pattern[pidx]
if pchar == char {
// Boost bonus for the first character in the pattern
if pidx == 0 {
point *= 2
}
// Bonus to consecutive matching chars
if consecutive {
point++
}
bonus += point
if pidx++; pidx == lenPattern {
break
}
consecutive = true
} else {
consecutive = false
}
}
} }
return bonus return nil
} }
// FuzzyMatch performs fuzzy-match func alloc16(offset int, slab *util.Slab, size int, clear bool) (int, []int16) {
func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune) Result { if slab != nil && cap(slab.I16) > offset+size {
if len(pattern) == 0 { slice := slab.I16[offset : offset+size]
return Result{0, 0, 0} if clear {
for idx := range slice {
slice[idx] = 0
}
}
return offset + size, slice
}
return offset, make([]int16, size)
}
func alloc32(offset int, slab *util.Slab, size int, clear bool) (int, []int32) {
if slab != nil && cap(slab.I32) > offset+size {
slice := slab.I32[offset : offset+size]
if clear {
for idx := range slice {
slice[idx] = 0
}
}
return offset + size, slice
}
return offset, make([]int32, size)
}
func charClassOfAscii(char rune) charClass {
if char >= 'a' && char <= 'z' {
return charLower
} else if char >= 'A' && char <= 'Z' {
return charUpper
} else if char >= '0' && char <= '9' {
return charNumber
}
return charNonWord
}
func charClassOfNonAscii(char rune) charClass {
if unicode.IsLower(char) {
return charLower
} else if unicode.IsUpper(char) {
return charUpper
} else if unicode.IsNumber(char) {
return charNumber
} else if unicode.IsLetter(char) {
return charLetter
}
return charNonWord
}
func charClassOf(char rune) charClass {
if char <= unicode.MaxASCII {
return charClassOfAscii(char)
}
return charClassOfNonAscii(char)
}
func bonusFor(prevClass charClass, class charClass) int16 {
if prevClass == charNonWord && class != charNonWord {
// Word boundary
return bonusBoundary
} else if prevClass == charLower && class == charUpper ||
prevClass != charNumber && class == charNumber {
// camelCase letter123
return bonusCamel123
} else if class == charNonWord {
return bonusNonWord
}
return 0
}
func bonusAt(input util.Chars, idx int) int16 {
if idx == 0 {
return bonusBoundary
}
return bonusFor(charClassOf(input.Get(idx-1)), charClassOf(input.Get(idx)))
}
type Algo func(caseSensitive bool, forward bool, input util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int)
func FuzzyMatchV2(caseSensitive bool, forward bool, input util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
// Assume that pattern is given in lowercase if case-insensitive.
// First check if there's a match and calculate bonus for each position.
// If the input string is too long, consider finding the matching chars in
// this phase as well (non-optimal alignment).
N := input.Length()
M := len(pattern)
switch M {
case 0:
return Result{0, 0, 0}, posArray(withPos, M)
case 1:
return ExactMatchNaive(caseSensitive, forward, input, pattern[0:1], withPos, slab)
}
// Since O(nm) algorithm can be prohibitively expensive for large input,
// we fall back to the greedy algorithm.
if slab != nil && N*M > cap(slab.I16) {
return FuzzyMatchV1(caseSensitive, forward, input, pattern, withPos, slab)
}
// Reuse pre-allocated integer slices to avoid unnecessary sweeping of garbage
offset16 := 0
offset32 := 0
// Bonus point for each position
offset16, B := alloc16(offset16, slab, N, false)
// The first occurrence of each character in the pattern
offset32, F := alloc32(offset32, slab, M, false)
// Rune array
offset32, T := alloc32(offset32, slab, N, false)
// Phase 1. Check if there's a match and calculate bonus for each point
pidx, lastIdx, prevClass := 0, 0, charNonWord
for idx := 0; idx < N; idx++ {
char := input.Get(idx)
var class charClass
if char <= unicode.MaxASCII {
class = charClassOfAscii(char)
} else {
class = charClassOfNonAscii(char)
}
if !caseSensitive && class == charUpper {
if char <= unicode.MaxASCII {
char += 32
} else {
char = unicode.To(unicode.LowerCase, char)
}
}
T[idx] = char
B[idx] = bonusFor(prevClass, class)
prevClass = class
if pidx < M {
if char == pattern[pidx] {
lastIdx = idx
F[pidx] = int32(idx)
pidx++
}
} else {
if char == pattern[M-1] {
lastIdx = idx
}
}
}
if pidx != M {
return Result{-1, -1, 0}, nil
}
// Phase 2. Fill in score matrix (H)
// Unlike the original algorithm, we do not allow omission.
width := lastIdx - int(F[0]) + 1
offset16, H := alloc16(offset16, slab, width*M, false)
// Possible length of consecutive chunk at each position.
offset16, C := alloc16(offset16, slab, width*M, false)
maxScore, maxScorePos := int16(0), 0
for i := 0; i < M; i++ {
I := i * width
inGap := false
for j := int(F[i]); j <= lastIdx; j++ {
j0 := j - int(F[0])
var s1, s2, consecutive int16
if j > int(F[i]) {
if inGap {
s2 = H[I+j0-1] + scoreGapExtention
} else {
s2 = H[I+j0-1] + scoreGapStart
}
}
if pattern[i] == T[j] {
var diag int16
if i > 0 && j0 > 0 {
diag = H[I-width+j0-1]
}
s1 = diag + scoreMatch
b := B[j]
if i > 0 {
// j > 0 if i > 0
consecutive = C[I-width+j0-1] + 1
// Break consecutive chunk
if b == bonusBoundary {
consecutive = 1
} else if consecutive > 1 {
b = util.Max16(b, util.Max16(bonusConsecutive, B[j-int(consecutive)+1]))
}
} else {
consecutive = 1
b *= bonusFirstCharMultiplier
}
if s1+b < s2 {
s1 += B[j]
consecutive = 0
} else {
s1 += b
}
}
C[I+j0] = consecutive
inGap = s1 < s2
score := util.Max16(util.Max16(s1, s2), 0)
if i == M-1 && (forward && score > maxScore || !forward && score >= maxScore) {
maxScore, maxScorePos = score, j
}
H[I+j0] = score
}
if DEBUG {
if i == 0 {
fmt.Print(" ")
for j := int(F[i]); j <= lastIdx; j++ {
fmt.Printf(" " + string(input.Get(j)) + " ")
}
fmt.Println()
}
fmt.Print(string(pattern[i]) + " ")
for idx := int(F[0]); idx < int(F[i]); idx++ {
fmt.Print(" 0 ")
}
for idx := int(F[i]); idx <= lastIdx; idx++ {
fmt.Printf("%2d ", H[i*width+idx-int(F[0])])
}
fmt.Println()
fmt.Print(" ")
for idx, p := range C[I : I+width] {
if idx+int(F[0]) < int(F[i]) {
p = 0
}
fmt.Printf("%2d ", p)
}
fmt.Println()
}
}
// Phase 3. (Optional) Backtrace to find character positions
pos := posArray(withPos, M)
j := int(F[0])
if withPos {
i := M - 1
j = maxScorePos
preferMatch := true
for {
I := i * width
j0 := j - int(F[0])
s := H[I+j0]
var s1, s2 int16
if i > 0 && j >= int(F[i]) {
s1 = H[I-width+j0-1]
}
if j > int(F[i]) {
s2 = H[I+j0-1]
}
if s > s1 && (s > s2 || s == s2 && preferMatch) {
*pos = append(*pos, j)
if i == 0 {
break
}
i--
}
preferMatch = C[I+j0] > 1 || I+width+j0+1 < len(C) && C[I+width+j0+1] > 0
j--
}
}
// Start offset we return here is only relevant when begin tiebreak is used.
// However finding the accurate offset requires backtracking, and we don't
// want to pay extra cost for the option that has lost its importance.
return Result{j, maxScorePos + 1, int(maxScore)}, pos
}
// Implement the same sorting criteria as V2
func calculateScore(caseSensitive bool, text util.Chars, pattern []rune, sidx int, eidx int, withPos bool) (int, *[]int) {
pidx, score, inGap, consecutive, firstBonus := 0, 0, false, 0, int16(0)
pos := posArray(withPos, len(pattern))
prevClass := charNonWord
if sidx > 0 {
prevClass = charClassOf(text.Get(sidx - 1))
}
for idx := sidx; idx < eidx; idx++ {
char := text.Get(idx)
class := charClassOf(char)
if !caseSensitive {
if char >= 'A' && char <= 'Z' {
char += 32
} else if char > unicode.MaxASCII {
char = unicode.To(unicode.LowerCase, char)
}
}
if char == pattern[pidx] {
if withPos {
*pos = append(*pos, idx)
}
score += scoreMatch
bonus := bonusFor(prevClass, class)
if consecutive == 0 {
firstBonus = bonus
} else {
// Break consecutive chunk
if bonus == bonusBoundary {
firstBonus = bonus
}
bonus = util.Max16(util.Max16(bonus, firstBonus), bonusConsecutive)
}
if pidx == 0 {
score += int(bonus * bonusFirstCharMultiplier)
} else {
score += int(bonus)
}
inGap = false
consecutive++
pidx++
} else {
if inGap {
score += scoreGapExtention
} else {
score += scoreGapStart
}
inGap = true
consecutive = 0
firstBonus = 0
}
prevClass = class
}
return score, pos
}
// FuzzyMatchV1 performs fuzzy-match
func FuzzyMatchV1(caseSensitive bool, forward bool, text util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
if len(pattern) == 0 {
return Result{0, 0, 0}, nil
} }
// 0. (FIXME) How to find the shortest match?
// a_____b__c__abc
// ^^^^^^^^^^ ^^^
// 1. forward scan (abc)
// *-----*-----*>
// a_____b___abc__
// 2. reverse scan (cba)
// a_____b___abc__
// <***
pidx := 0 pidx := 0
sidx := -1 sidx := -1
eidx := -1 eidx := -1
@@ -136,7 +501,7 @@ func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []run
lenPattern := len(pattern) lenPattern := len(pattern)
for index := 0; index < lenRunes; index++ { for index := 0; index < lenRunes; index++ {
char := runeAt(text, index, lenRunes, forward) char := text.Get(indexAt(index, lenRunes, forward))
// This is considerably faster than blindly applying strings.ToLower to the // This is considerably faster than blindly applying strings.ToLower to the
// whole string // whole string
if !caseSensitive { if !caseSensitive {
@@ -164,7 +529,8 @@ func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []run
if sidx >= 0 && eidx >= 0 { if sidx >= 0 && eidx >= 0 {
pidx-- pidx--
for index := eidx - 1; index >= sidx; index-- { for index := eidx - 1; index >= sidx; index-- {
char := runeAt(text, index, lenRunes, forward) tidx := indexAt(index, lenRunes, forward)
char := text.Get(tidx)
if !caseSensitive { if !caseSensitive {
if char >= 'A' && char <= 'Z' { if char >= 'A' && char <= 'Z' {
char += 32 char += 32
@@ -173,7 +539,8 @@ func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []run
} }
} }
pchar := pattern[indexAt(pidx, lenPattern, forward)] pidx_ := indexAt(pidx, lenPattern, forward)
pchar := pattern[pidx_]
if char == pchar { if char == pchar {
if pidx--; pidx < 0 { if pidx--; pidx < 0 {
sidx = index sidx = index
@@ -182,16 +549,14 @@ func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []run
} }
} }
// Calculate the bonus. This can't be done at the same time as the
// pattern scan above because 'forward' may be false.
if !forward { if !forward {
sidx, eidx = lenRunes-eidx, lenRunes-sidx sidx, eidx = lenRunes-eidx, lenRunes-sidx
} }
return Result{int32(sidx), int32(eidx), score, pos := calculateScore(caseSensitive, text, pattern, sidx, eidx, withPos)
evaluateBonus(caseSensitive, text, pattern, sidx, eidx)} return Result{sidx, eidx, score}, pos
} }
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
// ExactMatchNaive is a basic string searching algorithm that handles case // ExactMatchNaive is a basic string searching algorithm that handles case
@@ -199,23 +564,28 @@ func FuzzyMatch(caseSensitive bool, forward bool, text util.Chars, pattern []run
// of strings.ToLower + strings.Index for typical fzf use cases where input // of strings.ToLower + strings.Index for typical fzf use cases where input
// strings and patterns are not very long. // strings and patterns are not very long.
// //
// We might try to implement better algorithms in the future: // Since 0.15.0, this function searches for the match with the highest
// http://en.wikipedia.org/wiki/String_searching_algorithm // bonus point, instead of stopping immediately after finding the first match.
func ExactMatchNaive(caseSensitive bool, forward bool, text util.Chars, pattern []rune) Result { // The solution is much cheaper since there is only one possible alignment of
// the pattern.
func ExactMatchNaive(caseSensitive bool, forward bool, text util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
if len(pattern) == 0 { if len(pattern) == 0 {
return Result{0, 0, 0} return Result{0, 0, 0}, nil
} }
lenRunes := text.Length() lenRunes := text.Length()
lenPattern := len(pattern) lenPattern := len(pattern)
if lenRunes < lenPattern { if lenRunes < lenPattern {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
// For simplicity, only look at the bonus at the first character position
pidx := 0 pidx := 0
bestPos, bonus, bestBonus := -1, int16(0), int16(-1)
for index := 0; index < lenRunes; index++ { for index := 0; index < lenRunes; index++ {
char := runeAt(text, index, lenRunes, forward) index_ := indexAt(index, lenRunes, forward)
char := text.Get(index_)
if !caseSensitive { if !caseSensitive {
if char >= 'A' && char <= 'Z' { if char >= 'A' && char <= 'Z' {
char += 32 char += 32
@@ -223,33 +593,51 @@ func ExactMatchNaive(caseSensitive bool, forward bool, text util.Chars, pattern
char = unicode.To(unicode.LowerCase, char) char = unicode.To(unicode.LowerCase, char)
} }
} }
pchar := pattern[indexAt(pidx, lenPattern, forward)] pidx_ := indexAt(pidx, lenPattern, forward)
pchar := pattern[pidx_]
if pchar == char { if pchar == char {
if pidx_ == 0 {
bonus = bonusAt(text, index_)
}
pidx++ pidx++
if pidx == lenPattern { if pidx == lenPattern {
var sidx, eidx int if bonus > bestBonus {
if forward { bestPos, bestBonus = index, bonus
sidx = index - lenPattern + 1
eidx = index + 1
} else {
sidx = lenRunes - (index + 1)
eidx = lenRunes - (index - lenPattern + 1)
} }
return Result{int32(sidx), int32(eidx), if bonus == bonusBoundary {
evaluateBonus(caseSensitive, text, pattern, sidx, eidx)} break
}
index -= pidx - 1
pidx, bonus = 0, 0
} }
} else { } else {
index -= pidx index -= pidx
pidx = 0 pidx, bonus = 0, 0
} }
} }
return Result{-1, -1, 0} if bestPos >= 0 {
var sidx, eidx int
if forward {
sidx = bestPos - lenPattern + 1
eidx = bestPos + 1
} else {
sidx = lenRunes - (bestPos + 1)
eidx = lenRunes - (bestPos - lenPattern + 1)
}
score, _ := calculateScore(caseSensitive, text, pattern, sidx, eidx, false)
return Result{sidx, eidx, score}, nil
}
return Result{-1, -1, 0}, nil
} }
// PrefixMatch performs prefix-match // PrefixMatch performs prefix-match
func PrefixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune) Result { func PrefixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
if len(pattern) == 0 {
return Result{0, 0, 0}, nil
}
if text.Length() < len(pattern) { if text.Length() < len(pattern) {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
for index, r := range pattern { for index, r := range pattern {
@@ -258,20 +646,24 @@ func PrefixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []ru
char = unicode.ToLower(char) char = unicode.ToLower(char)
} }
if char != r { if char != r {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
} }
lenPattern := len(pattern) lenPattern := len(pattern)
return Result{0, int32(lenPattern), score, _ := calculateScore(caseSensitive, text, pattern, 0, lenPattern, false)
evaluateBonus(caseSensitive, text, pattern, 0, lenPattern)} return Result{0, lenPattern, score}, nil
} }
// SuffixMatch performs suffix-match // SuffixMatch performs suffix-match
func SuffixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune) Result { func SuffixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
trimmedLen := text.Length() - text.TrailingWhitespaces() lenRunes := text.Length()
trimmedLen := lenRunes - text.TrailingWhitespaces()
if len(pattern) == 0 {
return Result{trimmedLen, trimmedLen, 0}, nil
}
diff := trimmedLen - len(pattern) diff := trimmedLen - len(pattern)
if diff < 0 { if diff < 0 {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
for index, r := range pattern { for index, r := range pattern {
@@ -280,28 +672,29 @@ func SuffixMatch(caseSensitive bool, forward bool, text util.Chars, pattern []ru
char = unicode.ToLower(char) char = unicode.ToLower(char)
} }
if char != r { if char != r {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
} }
lenPattern := len(pattern) lenPattern := len(pattern)
sidx := trimmedLen - lenPattern sidx := trimmedLen - lenPattern
eidx := trimmedLen eidx := trimmedLen
return Result{int32(sidx), int32(eidx), score, _ := calculateScore(caseSensitive, text, pattern, sidx, eidx, false)
evaluateBonus(caseSensitive, text, pattern, sidx, eidx)} return Result{sidx, eidx, score}, nil
} }
// EqualMatch performs equal-match // EqualMatch performs equal-match
func EqualMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune) Result { func EqualMatch(caseSensitive bool, forward bool, text util.Chars, pattern []rune, withPos bool, slab *util.Slab) (Result, *[]int) {
// Note: EqualMatch always return a zero bonus. lenPattern := len(pattern)
if text.Length() != len(pattern) { if text.Length() != lenPattern {
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }
runesStr := text.ToString() runesStr := text.ToString()
if !caseSensitive { if !caseSensitive {
runesStr = strings.ToLower(runesStr) runesStr = strings.ToLower(runesStr)
} }
if runesStr == string(pattern) { if runesStr == string(pattern) {
return Result{0, int32(len(pattern)), 0} return Result{0, lenPattern, (scoreMatch+bonusBoundary)*lenPattern +
(bonusFirstCharMultiplier-1)*bonusBoundary}, nil
} }
return Result{-1, -1, 0} return Result{-1, -1, 0}, nil
} }

View File

@@ -1,97 +1,166 @@
package algo package algo
import ( import (
"math"
"sort"
"strings" "strings"
"testing" "testing"
"github.com/junegunn/fzf/src/util" "github.com/junegunn/fzf/src/util"
) )
func assertMatch(t *testing.T, fun func(bool, bool, util.Chars, []rune) Result, caseSensitive, forward bool, input, pattern string, sidx int32, eidx int32, bonus int32) { func assertMatch(t *testing.T, fun Algo, caseSensitive, forward bool, input, pattern string, sidx int, eidx int, score int) {
if !caseSensitive { if !caseSensitive {
pattern = strings.ToLower(pattern) pattern = strings.ToLower(pattern)
} }
res := fun(caseSensitive, forward, util.RunesToChars([]rune(input)), []rune(pattern)) res, pos := fun(caseSensitive, forward, util.RunesToChars([]rune(input)), []rune(pattern), true, nil)
if res.Start != sidx { var start, end int
t.Errorf("Invalid start index: %d (expected: %d, %s / %s)", res.Start, sidx, input, pattern) if pos == nil || len(*pos) == 0 {
start = res.Start
end = res.End
} else {
sort.Ints(*pos)
start = (*pos)[0]
end = (*pos)[len(*pos)-1] + 1
} }
if res.End != eidx { if start != sidx {
t.Errorf("Invalid end index: %d (expected: %d, %s / %s)", res.End, eidx, input, pattern) t.Errorf("Invalid start index: %d (expected: %d, %s / %s)", start, sidx, input, pattern)
} }
if res.Bonus != bonus { if end != eidx {
t.Errorf("Invalid bonus: %d (expected: %d, %s / %s)", res.Bonus, bonus, input, pattern) t.Errorf("Invalid end index: %d (expected: %d, %s / %s)", end, eidx, input, pattern)
}
if res.Score != score {
t.Errorf("Invalid score: %d (expected: %d, %s / %s)", res.Score, score, input, pattern)
} }
} }
func TestFuzzyMatch(t *testing.T) { func TestFuzzyMatch(t *testing.T) {
assertMatch(t, FuzzyMatch, false, true, "fooBarbaz", "oBZ", 2, 9, 2) for _, fn := range []Algo{FuzzyMatchV1, FuzzyMatchV2} {
assertMatch(t, FuzzyMatch, false, true, "foo bar baz", "fbb", 0, 9, 8) for _, forward := range []bool{true, false} {
assertMatch(t, FuzzyMatch, false, true, "/AutomatorDocument.icns", "rdoc", 9, 13, 4) assertMatch(t, fn, false, forward, "fooBarbaz1", "oBZ", 2, 9,
assertMatch(t, FuzzyMatch, false, true, "/man1/zshcompctl.1", "zshc", 6, 10, 7) scoreMatch*3+bonusCamel123+scoreGapStart+scoreGapExtention*3)
assertMatch(t, FuzzyMatch, false, true, "/.oh-my-zsh/cache", "zshc", 8, 13, 8) assertMatch(t, fn, false, forward, "foo bar baz", "fbb", 0, 9,
assertMatch(t, FuzzyMatch, false, true, "ab0123 456", "12356", 3, 10, 3) scoreMatch*3+bonusBoundary*bonusFirstCharMultiplier+
assertMatch(t, FuzzyMatch, false, true, "abc123 456", "12356", 3, 10, 5) bonusBoundary*2+2*scoreGapStart+4*scoreGapExtention)
assertMatch(t, fn, false, forward, "/AutomatorDocument.icns", "rdoc", 9, 13,
scoreMatch*4+bonusCamel123+bonusConsecutive*2)
assertMatch(t, fn, false, forward, "/man1/zshcompctl.1", "zshc", 6, 10,
scoreMatch*4+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary*3)
assertMatch(t, fn, false, forward, "/.oh-my-zsh/cache", "zshc", 8, 13,
scoreMatch*4+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary*3+scoreGapStart)
assertMatch(t, fn, false, forward, "ab0123 456", "12356", 3, 10,
scoreMatch*5+bonusConsecutive*3+scoreGapStart+scoreGapExtention)
assertMatch(t, fn, false, forward, "abc123 456", "12356", 3, 10,
scoreMatch*5+bonusCamel123*bonusFirstCharMultiplier+bonusCamel123*2+bonusConsecutive+scoreGapStart+scoreGapExtention)
assertMatch(t, fn, false, forward, "foo/bar/baz", "fbb", 0, 9,
scoreMatch*3+bonusBoundary*bonusFirstCharMultiplier+
bonusBoundary*2+2*scoreGapStart+4*scoreGapExtention)
assertMatch(t, fn, false, forward, "fooBarBaz", "fbb", 0, 7,
scoreMatch*3+bonusBoundary*bonusFirstCharMultiplier+
bonusCamel123*2+2*scoreGapStart+2*scoreGapExtention)
assertMatch(t, fn, false, forward, "foo barbaz", "fbb", 0, 8,
scoreMatch*3+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary+
scoreGapStart*2+scoreGapExtention*3)
assertMatch(t, fn, false, forward, "fooBar Baz", "foob", 0, 4,
scoreMatch*4+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary*3)
assertMatch(t, fn, false, forward, "xFoo-Bar Baz", "foo-b", 1, 6,
scoreMatch*5+bonusCamel123*bonusFirstCharMultiplier+bonusCamel123*2+
bonusNonWord+bonusBoundary)
assertMatch(t, FuzzyMatch, false, true, "foo/bar/baz", "fbb", 0, 9, 8) assertMatch(t, fn, true, forward, "fooBarbaz", "oBz", 2, 9,
assertMatch(t, FuzzyMatch, false, true, "fooBarBaz", "fbb", 0, 7, 6) scoreMatch*3+bonusCamel123+scoreGapStart+scoreGapExtention*3)
assertMatch(t, FuzzyMatch, false, true, "foo barbaz", "fbb", 0, 8, 6) assertMatch(t, fn, true, forward, "Foo/Bar/Baz", "FBB", 0, 9,
assertMatch(t, FuzzyMatch, false, true, "fooBar Baz", "foob", 0, 4, 8) scoreMatch*3+bonusBoundary*(bonusFirstCharMultiplier+2)+
assertMatch(t, FuzzyMatch, true, true, "fooBarbaz", "oBZ", -1, -1, 0) scoreGapStart*2+scoreGapExtention*4)
assertMatch(t, FuzzyMatch, true, true, "fooBarbaz", "oBz", 2, 9, 2) assertMatch(t, fn, true, forward, "FooBarBaz", "FBB", 0, 7,
assertMatch(t, FuzzyMatch, true, true, "Foo Bar Baz", "fbb", -1, -1, 0) scoreMatch*3+bonusBoundary*bonusFirstCharMultiplier+bonusCamel123*2+
assertMatch(t, FuzzyMatch, true, true, "Foo/Bar/Baz", "FBB", 0, 9, 8) scoreGapStart*2+scoreGapExtention*2)
assertMatch(t, FuzzyMatch, true, true, "FooBarBaz", "FBB", 0, 7, 6) assertMatch(t, fn, true, forward, "FooBar Baz", "FooB", 0, 4,
assertMatch(t, FuzzyMatch, true, true, "foo BarBaz", "fBB", 0, 8, 7) scoreMatch*4+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary*2+
assertMatch(t, FuzzyMatch, true, true, "FooBar Baz", "FooB", 0, 4, 8) util.Max(bonusCamel123, bonusBoundary))
assertMatch(t, FuzzyMatch, true, true, "fooBarbaz", "fooBarbazz", -1, -1, 0)
// Consecutive bonus updated
assertMatch(t, fn, true, forward, "foo-bar", "o-ba", 2, 6,
scoreMatch*4+bonusBoundary*3)
// Non-match
assertMatch(t, fn, true, forward, "fooBarbaz", "oBZ", -1, -1, 0)
assertMatch(t, fn, true, forward, "Foo Bar Baz", "fbb", -1, -1, 0)
assertMatch(t, fn, true, forward, "fooBarbaz", "fooBarbazz", -1, -1, 0)
}
}
} }
func TestFuzzyMatchBackward(t *testing.T) { func TestFuzzyMatchBackward(t *testing.T) {
assertMatch(t, FuzzyMatch, false, true, "foobar fb", "fb", 0, 4, 4) assertMatch(t, FuzzyMatchV1, false, true, "foobar fb", "fb", 0, 4,
assertMatch(t, FuzzyMatch, false, false, "foobar fb", "fb", 7, 9, 5) scoreMatch*2+bonusBoundary*bonusFirstCharMultiplier+
scoreGapStart+scoreGapExtention)
assertMatch(t, FuzzyMatchV1, false, false, "foobar fb", "fb", 7, 9,
scoreMatch*2+bonusBoundary*bonusFirstCharMultiplier+bonusBoundary)
} }
func TestExactMatchNaive(t *testing.T) { func TestExactMatchNaive(t *testing.T) {
for _, dir := range []bool{true, false} { for _, dir := range []bool{true, false} {
assertMatch(t, ExactMatchNaive, false, dir, "fooBarbaz", "oBA", 2, 5, 3)
assertMatch(t, ExactMatchNaive, true, dir, "fooBarbaz", "oBA", -1, -1, 0) assertMatch(t, ExactMatchNaive, true, dir, "fooBarbaz", "oBA", -1, -1, 0)
assertMatch(t, ExactMatchNaive, true, dir, "fooBarbaz", "fooBarbazz", -1, -1, 0) assertMatch(t, ExactMatchNaive, true, dir, "fooBarbaz", "fooBarbazz", -1, -1, 0)
assertMatch(t, ExactMatchNaive, false, dir, "/AutomatorDocument.icns", "rdoc", 9, 13, 4) assertMatch(t, ExactMatchNaive, false, dir, "fooBarbaz", "oBA", 2, 5,
assertMatch(t, ExactMatchNaive, false, dir, "/man1/zshcompctl.1", "zshc", 6, 10, 7)
assertMatch(t, ExactMatchNaive, false, dir, "/.oh-my-zsh/cache", "zsh/c", 8, 13, 10)
scoreMatch*3+bonusCamel123+bonusConsecutive)
assertMatch(t, ExactMatchNaive, false, dir, "/AutomatorDocument.icns", "rdoc", 9, 13,
scoreMatch*4+bonusCamel123+bonusConsecutive*2)
assertMatch(t, ExactMatchNaive, false, dir, "/man1/zshcompctl.1", "zshc", 6, 10,
scoreMatch*4+bonusBoundary*(bonusFirstCharMultiplier+3))
assertMatch(t, ExactMatchNaive, false, dir, "/.oh-my-zsh/cache", "zsh/c", 8, 13,
scoreMatch*5+bonusBoundary*(bonusFirstCharMultiplier+4))
}
}
func TestExactMatchNaiveBackward(t *testing.T) {
assertMatch(t, ExactMatchNaive, false, true, "foobar foob", "oo", 1, 3, 1)
assertMatch(t, ExactMatchNaive, false, false, "foobar foob", "oo", 8, 10, 1)
assertMatch(t, ExactMatchNaive, false, true, "foobar foob", "oo", 1, 3,
scoreMatch*2+bonusConsecutive)
assertMatch(t, ExactMatchNaive, false, false, "foobar foob", "oo", 8, 10,
scoreMatch*2+bonusConsecutive)
}
func TestPrefixMatch(t *testing.T) {
score := (scoreMatch+bonusBoundary)*3 + bonusBoundary*(bonusFirstCharMultiplier-1)
for _, dir := range []bool{true, false} {
assertMatch(t, PrefixMatch, true, dir, "fooBarbaz", "Foo", -1, -1, 0)
assertMatch(t, PrefixMatch, false, dir, "fooBarBaz", "baz", -1, -1, 0)
assertMatch(t, PrefixMatch, false, dir, "fooBarbaz", "Foo", 0, 3, 6)
assertMatch(t, PrefixMatch, false, dir, "foOBarBaZ", "foo", 0, 3, 7)
assertMatch(t, PrefixMatch, false, dir, "f-oBarbaz", "f-o", 0, 3, 8)
assertMatch(t, PrefixMatch, false, dir, "fooBarbaz", "Foo", 0, 3, score)
assertMatch(t, PrefixMatch, false, dir, "foOBarBaZ", "foo", 0, 3, score)
assertMatch(t, PrefixMatch, false, dir, "f-oBarbaz", "f-o", 0, 3, score)
}
}
func TestSuffixMatch(t *testing.T) {
for _, dir := range []bool{true, false} {
assertMatch(t, SuffixMatch, false, dir, "fooBarbaz", "Foo", -1, -1, 0)
assertMatch(t, SuffixMatch, false, dir, "fooBarbaz", "baz", 6, 9, 2)
assertMatch(t, SuffixMatch, false, dir, "fooBarBaZ", "baz", 6, 9, 5)
assertMatch(t, SuffixMatch, true, dir, "fooBarbaz", "Baz", -1, -1, 0)
assertMatch(t, SuffixMatch, false, dir, "fooBarbaz", "Foo", -1, -1, 0)
assertMatch(t, SuffixMatch, false, dir, "fooBarbaz", "baz", 6, 9,
scoreMatch*3+bonusConsecutive*2)
assertMatch(t, SuffixMatch, false, dir, "fooBarBaZ", "baz", 6, 9,
(scoreMatch+bonusCamel123)*3+bonusCamel123*(bonusFirstCharMultiplier-1))
}
}
func TestEmptyPattern(t *testing.T) {
for _, dir := range []bool{true, false} {
assertMatch(t, FuzzyMatch, true, dir, "foobar", "", 0, 0, 0)
assertMatch(t, FuzzyMatchV1, true, dir, "foobar", "", 0, 0, 0)
assertMatch(t, FuzzyMatchV2, true, dir, "foobar", "", 0, 0, 0)
assertMatch(t, ExactMatchNaive, true, dir, "foobar", "", 0, 0, 0)
assertMatch(t, PrefixMatch, true, dir, "foobar", "", 0, 0, 0)
assertMatch(t, SuffixMatch, true, dir, "foobar", "", 6, 6, 0)
}
}
func TestLongString(t *testing.T) {
bytes := make([]byte, math.MaxUint16*2)
for i := range bytes {
bytes[i] = 'x'
}
bytes[math.MaxUint16] = 'z'
assertMatch(t, FuzzyMatchV2, true, true, string(bytes), "zx", math.MaxUint16, math.MaxUint16+2, scoreMatch*2+bonusConsecutive)
}
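The new assertions express expected scores with the scoring constants of the revised ranking algorithm (scoreMatch, bonusBoundary, bonusCamel123, bonusConsecutive, bonusFirstCharMultiplier). Below is a rough, self-contained sketch of how such a score could be composed for a match that starts at a word boundary; the constant values are made up for illustration and are not fzf's.

    package main

    import "fmt"

    // Illustrative constants only; the real values live in fzf's algo package.
    const (
        scoreMatch               = 16
        bonusBoundary            = 8
        bonusFirstCharMultiplier = 2
    )

    // prefixScore sketches how an n-character match starting at a word boundary
    // might be scored: every matched rune earns scoreMatch plus the boundary
    // bonus, and the first rune's bonus is amplified.
    func prefixScore(n int) int {
        if n == 0 {
            return 0
        }
        return (scoreMatch+bonusBoundary)*n + bonusBoundary*(bonusFirstCharMultiplier-1)
    }

    func main() {
        fmt.Println(prefixScore(3)) // mirrors the `score` expression in TestPrefixMatch
    }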

View File

@@ -36,7 +36,7 @@ func init() {
ansiRegex = regexp.MustCompile("\x1b\\[[0-9;]*[mK]")
}
func extractColor(str string, state *ansiState, proc func(string, *ansiState) bool) (string, []ansiOffset, *ansiState) {
func extractColor(str string, state *ansiState, proc func(string, *ansiState) bool) (string, *[]ansiOffset, *ansiState) {
var offsets []ansiOffset
var output bytes.Buffer
@@ -84,7 +84,10 @@ func extractColor(str string, state *ansiState, proc func(string, *ansiState) bo
if proc != nil {
proc(rest, state)
}
return output.String(), offsets, state
if len(offsets) == 0 {
return output.String(), nil, state
}
return output.String(), &offsets, state
}
func interpretCode(ansiCode string, prevState *ansiState) *ansiState {
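extractColor now returns *[]ansiOffset and yields nil when a line contains no ANSI codes, so plain lines do not carry an empty slice around. A minimal sketch of the same nil-instead-of-empty pattern, with hypothetical types unrelated to fzf:

    package main

    import "fmt"

    type offset struct{ begin, end int }

    // parse returns a pointer to the collected offsets, or nil when there are
    // none, so callers can cheaply test "no highlighting" with a nil check.
    func parse(tokens []string) *[]offset {
        var offsets []offset
        for i, tok := range tokens {
            if tok == "color" { // stand-in for an ANSI escape sequence
                offsets = append(offsets, offset{i, i + 1})
            }
        }
        if len(offsets) == 0 {
            return nil
        }
        return &offsets
    }

    func main() {
        fmt.Println(parse([]string{"plain", "text"}) == nil) // true
    }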

View File

@@ -16,7 +16,7 @@ func TestExtractColor(t *testing.T) {
src := "hello world" src := "hello world"
var state *ansiState var state *ansiState
clean := "\x1b[0m" clean := "\x1b[0m"
check := func(assertion func(ansiOffsets []ansiOffset, state *ansiState)) { check := func(assertion func(ansiOffsets *[]ansiOffset, state *ansiState)) {
output, ansiOffsets, newState := extractColor(src, state, nil) output, ansiOffsets, newState := extractColor(src, state, nil)
state = newState state = newState
if output != "hello world" { if output != "hello world" {
@@ -26,127 +26,127 @@ func TestExtractColor(t *testing.T) {
assertion(ansiOffsets, state) assertion(ansiOffsets, state)
} }
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) > 0 { if offsets != nil {
t.Fail() t.Fail()
} }
}) })
state = nil state = nil
src = "\x1b[0mhello world" src = "\x1b[0mhello world"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) > 0 { if offsets != nil {
t.Fail() t.Fail()
} }
}) })
state = nil state = nil
src = "\x1b[1mhello world" src = "\x1b[1mhello world"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 0, 11, -1, -1, true) assert((*offsets)[0], 0, 11, -1, -1, true)
}) })
state = nil state = nil
src = "\x1b[1mhello \x1b[mworld" src = "\x1b[1mhello \x1b[mworld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 0, 6, -1, -1, true) assert((*offsets)[0], 0, 6, -1, -1, true)
}) })
state = nil state = nil
src = "\x1b[1mhello \x1b[Kworld" src = "\x1b[1mhello \x1b[Kworld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 0, 11, -1, -1, true) assert((*offsets)[0], 0, 11, -1, -1, true)
}) })
state = nil state = nil
src = "hello \x1b[34;45;1mworld" src = "hello \x1b[34;45;1mworld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 11, 4, 5, true) assert((*offsets)[0], 6, 11, 4, 5, true)
}) })
state = nil state = nil
src = "hello \x1b[34;45;1mwor\x1b[34;45;1mld" src = "hello \x1b[34;45;1mwor\x1b[34;45;1mld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 11, 4, 5, true) assert((*offsets)[0], 6, 11, 4, 5, true)
}) })
state = nil state = nil
src = "hello \x1b[34;45;1mwor\x1b[0mld" src = "hello \x1b[34;45;1mwor\x1b[0mld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 9, 4, 5, true) assert((*offsets)[0], 6, 9, 4, 5, true)
}) })
state = nil state = nil
src = "hello \x1b[34;48;5;233;1mwo\x1b[38;5;161mr\x1b[0ml\x1b[38;5;161md" src = "hello \x1b[34;48;5;233;1mwo\x1b[38;5;161mr\x1b[0ml\x1b[38;5;161md"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 3 { if len(*offsets) != 3 {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 8, 4, 233, true) assert((*offsets)[0], 6, 8, 4, 233, true)
assert(offsets[1], 8, 9, 161, 233, true) assert((*offsets)[1], 8, 9, 161, 233, true)
assert(offsets[2], 10, 11, 161, -1, false) assert((*offsets)[2], 10, 11, 161, -1, false)
}) })
// {38,48};5;{38,48} // {38,48};5;{38,48}
state = nil state = nil
src = "hello \x1b[38;5;38;48;5;48;1mwor\x1b[38;5;48;48;5;38ml\x1b[0md" src = "hello \x1b[38;5;38;48;5;48;1mwor\x1b[38;5;48;48;5;38ml\x1b[0md"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 2 { if len(*offsets) != 2 {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 9, 38, 48, true) assert((*offsets)[0], 6, 9, 38, 48, true)
assert(offsets[1], 9, 10, 48, 38, true) assert((*offsets)[1], 9, 10, 48, 38, true)
}) })
src = "hello \x1b[32;1mworld" src = "hello \x1b[32;1mworld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
if state.fg != 2 || state.bg != -1 || !state.bold { if state.fg != 2 || state.bg != -1 || !state.bold {
t.Fail() t.Fail()
} }
assert(offsets[0], 6, 11, 2, -1, true) assert((*offsets)[0], 6, 11, 2, -1, true)
}) })
src = "hello world" src = "hello world"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 1 { if len(*offsets) != 1 {
t.Fail() t.Fail()
} }
if state.fg != 2 || state.bg != -1 || !state.bold { if state.fg != 2 || state.bg != -1 || !state.bold {
t.Fail() t.Fail()
} }
assert(offsets[0], 0, 11, 2, -1, true) assert((*offsets)[0], 0, 11, 2, -1, true)
}) })
src = "hello \x1b[0;38;5;200;48;5;100mworld" src = "hello \x1b[0;38;5;200;48;5;100mworld"
check(func(offsets []ansiOffset, state *ansiState) { check(func(offsets *[]ansiOffset, state *ansiState) {
if len(offsets) != 2 { if len(*offsets) != 2 {
t.Fail() t.Fail()
} }
if state.fg != 200 || state.bg != 100 || state.bold { if state.fg != 200 || state.bg != 100 || state.bold {
t.Fail() t.Fail()
} }
assert(offsets[0], 0, 6, 2, -1, true) assert((*offsets)[0], 0, 6, 2, -1, true)
assert(offsets[1], 6, 11, 200, 100, false) assert((*offsets)[1], 6, 11, 200, 100, false)
}) })
} }
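With the pointer return, callers dereference the slice only after a nil check, which is the calling convention the updated assertions follow. A small illustrative sketch (names are hypothetical):

    package main

    import "fmt"

    type ansiOffset struct{ begin, end int32 }

    // countOffsets shows the calling pattern: check the pointer before
    // dereferencing, treating nil as "no offsets".
    func countOffsets(offsets *[]ansiOffset) int {
        if offsets == nil {
            return 0
        }
        return len(*offsets)
    }

    func main() {
        fmt.Println(countOffsets(nil))                    // 0
        fmt.Println(countOffsets(&[]ansiOffset{{0, 11}})) // 1
    }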

View File

@@ -3,7 +3,7 @@ package fzf
import "sync"
// queryCache associates strings to lists of items
type queryCache map[string][]*Item
type queryCache map[string][]*Result
// ChunkCache associates Chunk and query string to lists of items
type ChunkCache struct {
@@ -17,7 +17,7 @@ func NewChunkCache() ChunkCache {
}
// Add adds the list to the cache
func (cc *ChunkCache) Add(chunk *Chunk, key string, list []*Item) {
func (cc *ChunkCache) Add(chunk *Chunk, key string, list []*Result) {
if len(key) == 0 || !chunk.IsFull() || len(list) > queryCacheMax {
return
}
@@ -34,7 +34,7 @@ func (cc *ChunkCache) Add(chunk *Chunk, key string, list []*Item) {
}
// Find is called to lookup ChunkCache
func (cc *ChunkCache) Find(chunk *Chunk, key string) ([]*Item, bool) {
func (cc *ChunkCache) Find(chunk *Chunk, key string) ([]*Result, bool) {
if len(key) == 0 || !chunk.IsFull() {
return nil, false
}

View File

@@ -7,8 +7,8 @@ func TestChunkCache(t *testing.T) {
chunk2 := make(Chunk, chunkSize)
chunk1p := &Chunk{}
chunk2p := &chunk2
items1 := []*Item{&Item{}}
items2 := []*Item{&Item{}, &Item{}}
items1 := []*Result{&Result{}}
items2 := []*Result{&Result{}, &Result{}}
cache.Add(chunk1p, "foo", items1)
cache.Add(chunk2p, "foo", items1)
cache.Add(chunk2p, "bar", items2)
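ChunkCache now maps a (chunk, query) pair to []*Result instead of []*Item. A hedged sketch of the same two-level cache shape, using plain types rather than fzf's:

    package main

    import (
        "fmt"
        "sync"
    )

    // resultCache sketches the two-level lookup used by fzf's ChunkCache:
    // an outer key per chunk, an inner key per query string.
    type resultCache struct {
        mu    sync.Mutex
        cache map[int]map[string][]string // chunk ID -> query -> cached results
    }

    func (c *resultCache) Add(chunk int, query string, results []string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.cache[chunk] == nil {
            c.cache[chunk] = map[string][]string{}
        }
        c.cache[chunk][query] = results
    }

    func (c *resultCache) Find(chunk int, query string) ([]string, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        results, ok := c.cache[chunk][query]
        return results, ok
    }

    func main() {
        c := resultCache{cache: map[int]map[string][]string{}}
        c.Add(1, "foo", []string{"foobar"})
        fmt.Println(c.Find(1, "foo"))
    }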

View File

@@ -9,10 +9,10 @@ import (
func TestChunkList(t *testing.T) {
// FIXME global
sortCriteria = []criterion{byMatchLen, byLength}
sortCriteria = []criterion{byScore, byLength}
cl := NewChunkList(func(s []byte, i int) *Item {
return &Item{text: util.ToChars(s), rank: buildEmptyRank(int32(i * 2))}
return &Item{text: util.ToChars(s), index: int32(i * 2)}
})
// Snapshot
@@ -41,11 +41,8 @@ func TestChunkList(t *testing.T) {
if len(*chunk1) != 2 {
t.Error("Snapshot should contain only two items")
}
last := func(arr [5]int32) int32 {
return arr[len(arr)-1]
}
if (*chunk1)[0].text.ToString() != "hello" || last((*chunk1)[0].rank) != 0 ||
(*chunk1)[1].text.ToString() != "world" || last((*chunk1)[1].rank) != 2 {
if (*chunk1)[0].text.ToString() != "hello" || (*chunk1)[0].index != 0 ||
(*chunk1)[1].text.ToString() != "world" || (*chunk1)[1].index != 2 {
t.Error("Invalid data")
}
if chunk1.IsFull() {

View File

@@ -8,26 +8,34 @@ import (
const (
// Current version
version = "0.13.4"
version = "0.15.1"
// Core
coordinatorDelayMax time.Duration = 100 * time.Millisecond
coordinatorDelayStep time.Duration = 10 * time.Millisecond
// Reader
defaultCommand = `find . -path '*/\.*' -prune -o -type f -print -o -type l -print 2> /dev/null | sed s/^..//`
readerBufferSize = 64 * 1024
// Terminal
initialDelay = 20 * time.Millisecond
initialDelayTac = 100 * time.Millisecond
spinnerDuration = 200 * time.Millisecond
maxPatternLength = 100
// Matcher
numPartitionsMultiplier = 8
maxPartitions = 32
progressMinDuration = 200 * time.Millisecond
// Capacity of each chunk
chunkSize int = 100
// Pre-allocated memory slices to minimize GC
slab16Size int = 100 * 1024 // 200KB * 32 = 12.8MB
slab32Size int = 2048 // 8KB * 32 = 256KB
// Do not cache results of low selectivity queries
queryCacheMax int = chunkSize / 5
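slab16Size and slab32Size size the scratch buffers that matcher goroutines reuse instead of allocating per item. A rough sketch of the idea; util.Slab's real layout may differ:

    package main

    import "fmt"

    // slab is a reusable scratch buffer; each matcher goroutine keeps its own,
    // so per-item temporary arrays do not hit the garbage collector.
    type slab struct {
        i16 []int16
        i32 []int32
    }

    func makeSlab(size16, size32 int) *slab {
        return &slab{
            i16: make([]int16, size16),
            i32: make([]int32, size32),
        }
    }

    func main() {
        s := makeSlab(100*1024, 2048) // mirrors slab16Size / slab32Size
        fmt.Println(len(s.i16), len(s.i32))
    }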

View File

@@ -28,16 +28,11 @@ package fzf
import (
"fmt"
"os"
"runtime"
"time"
"github.com/junegunn/fzf/src/util"
)
func initProcs() {
runtime.GOMAXPROCS(runtime.NumCPU())
}
/*
Reader -> EvtReadFin
Reader -> EvtReadNew -> Matcher (restart)
@@ -49,8 +44,6 @@ Matcher -> EvtHeader -> Terminal (update header)
// Run starts fzf
func Run(opts *Options) {
initProcs()
sort := opts.Sort > 0
sortCriteria = opts.Criteria
@@ -63,16 +56,16 @@ func Run(opts *Options) {
eventBox := util.NewEventBox()
// ANSI code processor
ansiProcessor := func(data []byte) (util.Chars, []ansiOffset) {
ansiProcessor := func(data []byte) (util.Chars, *[]ansiOffset) {
return util.ToChars(data), nil
}
ansiProcessorRunes := func(data []rune) (util.Chars, []ansiOffset) {
ansiProcessorRunes := func(data []rune) (util.Chars, *[]ansiOffset) {
return util.RunesToChars(data), nil
}
if opts.Ansi {
if opts.Theme != nil {
var state *ansiState
ansiProcessor = func(data []byte) (util.Chars, []ansiOffset) {
ansiProcessor = func(data []byte) (util.Chars, *[]ansiOffset) {
trimmed, offsets, newState := extractColor(string(data), state, nil)
state = newState
return util.RunesToChars([]rune(trimmed)), offsets
@@ -80,12 +73,12 @@ func Run(opts *Options) {
} else {
// When color is disabled but ansi option is given,
// we simply strip out ANSI codes from the input
ansiProcessor = func(data []byte) (util.Chars, []ansiOffset) {
ansiProcessor = func(data []byte) (util.Chars, *[]ansiOffset) {
trimmed, _, _ := extractColor(string(data), nil, nil)
return util.RunesToChars([]rune(trimmed)), nil
}
}
ansiProcessorRunes = func(data []rune) (util.Chars, []ansiOffset) {
ansiProcessorRunes = func(data []rune) (util.Chars, *[]ansiOffset) {
return ansiProcessor([]byte(string(data)))
}
}
@@ -102,14 +95,13 @@ func Run(opts *Options) {
}
chars, colors := ansiProcessor(data)
return &Item{
text: chars,
colors: colors,
rank: buildEmptyRank(int32(index))}
return &Item{
index: int32(index),
text: chars,
colors: colors}
})
} else {
chunkList = NewChunkList(func(data []byte, index int) *Item {
chars := util.ToChars(data)
tokens := Tokenize(chars, opts.Delimiter)
tokens := Tokenize(util.ToChars(data), opts.Delimiter)
trans := Transform(tokens, opts.WithNth)
if len(header) < opts.HeaderLines {
header = append(header, string(joinTokens(trans)))
@@ -118,10 +110,9 @@ func Run(opts *Options) {
}
textRunes := joinTokens(trans)
item := Item{
text: util.RunesToChars(textRunes),
origText: &data,
colors: nil,
rank: buildEmptyRank(int32(index))}
item := Item{
index: int32(index),
origText: &data,
colors: nil}
trimmed, colors := ansiProcessorRunes(textRunes)
item.text = trimmed
@@ -152,27 +143,30 @@ func Run(opts *Options) {
}
patternBuilder := func(runes []rune) *Pattern {
return BuildPattern(
opts.Fuzzy, opts.Extended, opts.Case, forward,
opts.Nth, opts.Delimiter, runes)
opts.Fuzzy, opts.FuzzyAlgo, opts.Extended, opts.Case, forward,
opts.Filter == nil, opts.Nth, opts.Delimiter, runes)
}
matcher := NewMatcher(patternBuilder, sort, opts.Tac, eventBox)
// Filtering mode
if opts.Filter != nil {
if opts.PrintQuery {
fmt.Println(*opts.Filter)
opts.Printer(*opts.Filter)
}
pattern := patternBuilder([]rune(*opts.Filter))
found := false
if streamingFilter {
slab := util.MakeSlab(slab16Size, slab32Size)
reader := Reader{
func(runes []byte) bool {
item := chunkList.trans(runes, 0)
if item != nil && pattern.MatchItem(item) {
fmt.Println(item.text.ToString())
found = true
}
if item != nil {
if result, _, _ := pattern.MatchItem(item, false, slab); result != nil {
opts.Printer(item.text.ToString())
found = true
}
}
return false
}, eventBox, opts.ReadZero}
@@ -186,7 +180,7 @@ func Run(opts *Options) {
chunks: snapshot,
pattern: pattern})
for i := 0; i < merger.Length(); i++ {
fmt.Println(merger.Get(i).AsString(opts.Ansi))
opts.Printer(merger.Get(i).item.AsString(opts.Ansi))
found = true
}
}
@@ -260,13 +254,13 @@ func Run(opts *Options) {
} else if val.final {
if opts.Exit0 && count == 0 || opts.Select1 && count == 1 {
if opts.PrintQuery {
fmt.Println(opts.Query)
opts.Printer(opts.Query)
}
if len(opts.Expect) > 0 {
fmt.Println()
opts.Printer("")
}
for i := 0; i < count; i++ {
fmt.Println(val.Get(i).AsString(opts.Ansi))
opts.Printer(val.Get(i).item.AsString(opts.Ansi))
}
if count > 0 {
os.Exit(exitOk)
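opts.Printer abstracts the output call so --print0 can switch the delimiter from a newline to NUL. A small standard-library sketch of writing and then reading NUL-delimited records:

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
    )

    // printer mimics the two output modes: newline-delimited by default,
    // NUL-delimited when --print0 is given.
    func printer(out *bytes.Buffer, print0 bool) func(string) {
        if print0 {
            return func(s string) { out.WriteString(s); out.WriteByte(0) }
        }
        return func(s string) { out.WriteString(s); out.WriteByte('\n') }
    }

    func main() {
        var buf bytes.Buffer
        p := printer(&buf, true)
        p("foo bar")
        p("baz")

        // Reading side: split on NUL, the same way `xargs -0` would.
        scanner := bufio.NewScanner(&buf)
        scanner.Split(func(data []byte, atEOF bool) (int, []byte, error) {
            if i := bytes.IndexByte(data, 0); i >= 0 {
                return i + 1, data[:i], nil
            }
            if atEOF && len(data) > 0 {
                return len(data), data, nil
            }
            return 0, nil, nil
        })
        for scanner.Scan() {
            fmt.Println(scanner.Text())
        }
    }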

View File

@@ -1,295 +1,39 @@
package fzf
import (
"math"
"github.com/junegunn/fzf/src/curses"
"github.com/junegunn/fzf/src/util"
)
// Offset holds three 32-bit integers denoting the offsets of a matched substring
type Offset [3]int32
type colorOffset struct {
offset [2]int32
color int
bold bool
}
// Item represents each input line
type Item struct {
text util.Chars
origText *[]byte
transformed []Token
offsets []Offset
colors []ansiOffset
rank [5]int32
bonus int32
}
type Item struct {
index int32
text util.Chars
origText *[]byte
colors *[]ansiOffset
transformed []Token
}
// Sort criteria to use. Never changes once fzf is started.
var sortCriteria []criterion
func isRankValid(rank [5]int32) bool {
// Exclude ordinal index
for _, r := range rank[:4] {
if r > 0 {
return true
}
}
return false
}
func buildEmptyRank(index int32) [5]int32 {
return [5]int32{0, 0, 0, 0, index}
}
// Index returns ordinal index of the Item
func (item *Item) Index() int32 {
return item.rank[4]
return item.index
}
// Colors returns ansiOffsets of the Item
func (item *Item) Colors() []ansiOffset {
if item.colors == nil {
return []ansiOffset{}
}
return *item.colors
}
// Rank calculates rank of the Item
func (item *Item) Rank(cache bool) [5]int32 {
if cache && isRankValid(item.rank) {
return item.rank
}
matchlen := 0
prevEnd := 0
lenSum := 0
minBegin := math.MaxInt32
for _, offset := range item.offsets {
begin := int(offset[0])
end := int(offset[1])
trimLen := int(offset[2])
lenSum += trimLen
if prevEnd > begin {
begin = prevEnd
}
if end > prevEnd {
prevEnd = end
}
if end > begin {
if begin < minBegin {
minBegin = begin
}
matchlen += end - begin
}
}
rank := buildEmptyRank(item.Index())
for idx, criterion := range sortCriteria {
var val int32
switch criterion {
case byMatchLen:
if matchlen == 0 {
val = math.MaxInt32
} else {
// It is extremely unlikely that bonus exceeds 128
val = 128*int32(matchlen) - item.bonus
}
case byLength:
// It is guaranteed that .transformed in not null in normal execution
if item.transformed != nil {
// If offsets is empty, lenSum will be 0, but we don't care
val = int32(lenSum)
} else {
val = int32(item.text.Length())
}
case byBegin:
// We can't just look at item.offsets[0][0] because it can be an inverse term
whitePrefixLen := 0
numChars := item.text.Length()
for idx := 0; idx < numChars; idx++ {
r := item.text.Get(idx)
whitePrefixLen = idx
if idx == minBegin || r != ' ' && r != '\t' {
break
}
}
val = int32(minBegin - whitePrefixLen)
case byEnd:
if prevEnd > 0 {
val = int32(1 + item.text.Length() - prevEnd)
} else {
// Empty offsets due to inverse terms.
val = 1
}
}
rank[idx] = val
}
if cache {
item.rank = rank
}
return rank
}
// AsString returns the original string
func (item *Item) AsString(stripAnsi bool) string {
return *item.StringPtr(stripAnsi)
}
// StringPtr returns the pointer to the original string
func (item *Item) StringPtr(stripAnsi bool) *string {
if item.origText != nil {
if stripAnsi {
trimmed, _, _ := extractColor(string(*item.origText), nil, nil)
return &trimmed
}
orig := string(*item.origText)
return &orig
}
str := item.text.ToString()
return &str
}
func (item *Item) AsString(stripAnsi bool) string {
if item.origText != nil {
if stripAnsi {
trimmed, _, _ := extractColor(string(*item.origText), nil, nil)
return trimmed
}
return string(*item.origText)
}
return item.text.ToString()
}
func (item *Item) colorOffsets(color int, bold bool, current bool) []colorOffset {
if len(item.colors) == 0 {
var offsets []colorOffset
for _, off := range item.offsets {
offsets = append(offsets, colorOffset{offset: [2]int32{off[0], off[1]}, color: color, bold: bold})
}
return offsets
}
// Find max column
var maxCol int32
for _, off := range item.offsets {
if off[1] > maxCol {
maxCol = off[1]
}
}
for _, ansi := range item.colors {
if ansi.offset[1] > maxCol {
maxCol = ansi.offset[1]
}
}
cols := make([]int, maxCol)
for colorIndex, ansi := range item.colors {
for i := ansi.offset[0]; i < ansi.offset[1]; i++ {
cols[i] = colorIndex + 1 // XXX
}
}
for _, off := range item.offsets {
for i := off[0]; i < off[1]; i++ {
cols[i] = -1
}
}
// sort.Sort(ByOrder(offsets))
// Merge offsets
// ------------ ---- -- ----
// ++++++++ ++++++++++
// --++++++++-- --++++++++++---
curr := 0
start := 0
var offsets []colorOffset
add := func(idx int) {
if curr != 0 && idx > start {
if curr == -1 {
offsets = append(offsets, colorOffset{
offset: [2]int32{int32(start), int32(idx)}, color: color, bold: bold})
} else {
ansi := item.colors[curr-1]
fg := ansi.color.fg
if fg == -1 {
if current {
fg = curses.CurrentFG
} else {
fg = curses.FG
}
}
bg := ansi.color.bg
if bg == -1 {
if current {
bg = curses.DarkBG
} else {
bg = curses.BG
}
}
offsets = append(offsets, colorOffset{
offset: [2]int32{int32(start), int32(idx)},
color: curses.PairFor(fg, bg),
bold: ansi.color.bold || bold})
}
}
}
for idx, col := range cols {
if col != curr {
add(idx)
start = idx
curr = col
}
}
add(int(maxCol))
return offsets
}
// ByOrder is for sorting substring offsets
type ByOrder []Offset
func (a ByOrder) Len() int {
return len(a)
}
func (a ByOrder) Swap(i, j int) {
a[i], a[j] = a[j], a[i]
}
func (a ByOrder) Less(i, j int) bool {
ioff := a[i]
joff := a[j]
return (ioff[0] < joff[0]) || (ioff[0] == joff[0]) && (ioff[1] <= joff[1])
}
// ByRelevance is for sorting Items
type ByRelevance []*Item
func (a ByRelevance) Len() int {
return len(a)
}
func (a ByRelevance) Swap(i, j int) {
a[i], a[j] = a[j], a[i]
}
func (a ByRelevance) Less(i, j int) bool {
irank := a[i].Rank(true)
jrank := a[j].Rank(true)
return compareRanks(irank, jrank, false)
}
// ByRelevanceTac is for sorting Items
type ByRelevanceTac []*Item
func (a ByRelevanceTac) Len() int {
return len(a)
}
func (a ByRelevanceTac) Swap(i, j int) {
a[i], a[j] = a[j], a[i]
}
func (a ByRelevanceTac) Less(i, j int) bool {
irank := a[i].Rank(true)
jrank := a[j].Rank(true)
return compareRanks(irank, jrank, true)
}
func compareRanks(irank [5]int32, jrank [5]int32, tac bool) bool {
for idx := 0; idx < 4; idx++ {
left := irank[idx]
right := jrank[idx]
if left < right {
return true
} else if left > right {
return false
}
}
return (irank[4] <= jrank[4]) != tac
}
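Item now carries only the input line (index, text, colors), while match-specific data such as offsets and rank moved to a separate Result type. A minimal sketch of that split with hypothetical field names:

    package main

    import "fmt"

    // item carries only what is known at read time.
    type item struct {
        index int32
        text  string
    }

    // result pairs an item with data produced by matching it against a query.
    type result struct {
        item  *item
        score int
    }

    // less sketches the tiebreak order: higher score first, then input order.
    func less(a, b *result) bool {
        if a.score != b.score {
            return a.score > b.score
        }
        return a.item.index < b.item.index
    }

    func main() {
        a := &result{&item{0, "foo"}, 10}
        b := &result{&item{1, "foobar"}, 12}
        fmt.Println(less(a, b)) // false: b scores higher
    }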

View File

@@ -1,109 +1,23 @@
package fzf
import (
"math"
"sort"
"testing"
"github.com/junegunn/fzf/src/curses"
"github.com/junegunn/fzf/src/util"
)
func TestOffsetSort(t *testing.T) {
offsets := []Offset{
Offset{3, 5}, Offset{2, 7},
Offset{1, 3}, Offset{2, 9}}
sort.Sort(ByOrder(offsets))
if offsets[0][0] != 1 || offsets[0][1] != 3 ||
offsets[1][0] != 2 || offsets[1][1] != 7 ||
offsets[2][0] != 2 || offsets[2][1] != 9 ||
offsets[3][0] != 3 || offsets[3][1] != 5 {
t.Error("Invalid order:", offsets)
}
}
func TestStringPtr(t *testing.T) {
orig := []byte("\x1b[34mfoo")
text := []byte("\x1b[34mbar")
item := Item{origText: &orig, text: util.ToChars(text)}
if item.AsString(true) != "foo" || item.AsString(false) != string(orig) {
t.Fail()
}
if item.AsString(true) != "foo" {
t.Fail()
}
item.origText = nil
if item.AsString(true) != string(text) || item.AsString(false) != string(text) {
t.Fail()
}
}
func TestRankComparison(t *testing.T) {
if compareRanks([5]int32{3, 0, 0, 0, 5}, [5]int32{2, 0, 0, 0, 7}, false) ||
!compareRanks([5]int32{3, 0, 0, 0, 5}, [5]int32{3, 0, 0, 0, 6}, false) ||
!compareRanks([5]int32{1, 2, 0, 0, 3}, [5]int32{1, 3, 0, 0, 2}, false) ||
!compareRanks([5]int32{0, 0, 0, 0, 0}, [5]int32{0, 0, 0, 0, 0}, false) {
t.Error("Invalid order")
}
if compareRanks([5]int32{3, 0, 0, 0, 5}, [5]int32{2, 0, 0, 0, 7}, true) ||
!compareRanks([5]int32{3, 0, 0, 0, 5}, [5]int32{3, 0, 0, 0, 6}, false) ||
!compareRanks([5]int32{1, 2, 0, 0, 3}, [5]int32{1, 3, 0, 0, 2}, true) ||
!compareRanks([5]int32{0, 0, 0, 0, 0}, [5]int32{0, 0, 0, 0, 0}, false) {
t.Error("Invalid order (tac)")
}
}
// Match length, string length, index
func TestItemRank(t *testing.T) {
// FIXME global
sortCriteria = []criterion{byMatchLen, byLength}
strs := [][]rune{[]rune("foo"), []rune("foobar"), []rune("bar"), []rune("baz")}
item1 := Item{text: util.RunesToChars(strs[0]), offsets: []Offset{}, rank: [5]int32{0, 0, 0, 0, 1}}
rank1 := item1.Rank(true)
if rank1[0] != math.MaxInt32 || rank1[1] != 3 || rank1[4] != 1 {
t.Error(item1.Rank(true))
}
// Only differ in index
item2 := Item{text: util.RunesToChars(strs[0]), offsets: []Offset{}}
items := []*Item{&item1, &item2}
sort.Sort(ByRelevance(items))
if items[0] != &item2 || items[1] != &item1 {
t.Error(items)
}
items = []*Item{&item2, &item1, &item1, &item2}
sort.Sort(ByRelevance(items))
if items[0] != &item2 || items[1] != &item2 ||
items[2] != &item1 || items[3] != &item1 {
t.Error(items)
}
// Sort by relevance
item3 := Item{text: util.RunesToChars(strs[1]), rank: [5]int32{0, 0, 0, 0, 2}, offsets: []Offset{Offset{1, 3}, Offset{5, 7}}}
item4 := Item{text: util.RunesToChars(strs[1]), rank: [5]int32{0, 0, 0, 0, 2}, offsets: []Offset{Offset{1, 2}, Offset{6, 7}}}
item5 := Item{text: util.RunesToChars(strs[2]), rank: [5]int32{0, 0, 0, 0, 2}, offsets: []Offset{Offset{1, 3}, Offset{5, 7}}}
item6 := Item{text: util.RunesToChars(strs[2]), rank: [5]int32{0, 0, 0, 0, 2}, offsets: []Offset{Offset{1, 2}, Offset{6, 7}}}
items = []*Item{&item1, &item2, &item3, &item4, &item5, &item6}
sort.Sort(ByRelevance(items))
if items[0] != &item6 || items[1] != &item4 ||
items[2] != &item5 || items[3] != &item3 ||
items[4] != &item2 || items[5] != &item1 {
t.Error(items)
}
}
func TestColorOffset(t *testing.T) {
// ------------ 20 ---- -- ----
// ++++++++ ++++++++++
// --++++++++-- --++++++++++---
item := Item{
offsets: []Offset{Offset{5, 15}, Offset{25, 35}},
colors: []ansiOffset{
ansiOffset{[2]int32{0, 20}, ansiState{1, 5, false}},
ansiOffset{[2]int32{22, 27}, ansiState{2, 6, true}},
ansiOffset{[2]int32{30, 32}, ansiState{3, 7, false}},
ansiOffset{[2]int32{33, 40}, ansiState{4, 8, true}}}}
// [{[0 5] 9 false} {[5 15] 99 false} {[15 20] 9 false} {[22 25] 10 true} {[25 35] 99 false} {[35 40] 11 true}]
offsets := item.colorOffsets(99, false, true)
assert := func(idx int, b int32, e int32, c int, bold bool) {
o := offsets[idx]
if o.offset[0] != b || o.offset[1] != e || o.color != c || o.bold != bold {
t.Error(o)
}
}
assert(0, 0, 5, curses.ColUser, false)
assert(1, 5, 15, 99, false)
assert(2, 15, 20, curses.ColUser, false)
assert(3, 22, 25, curses.ColUser+1, true)
assert(4, 25, 35, 99, false)
assert(5, 35, 40, curses.ColUser+2, true)
}
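TestStringPtr exercises AsString with and without ANSI stripping. A hedged sketch of the same behaviour using a plain regexp instead of fzf's extractColor:

    package main

    import (
        "fmt"
        "regexp"
    )

    var ansi = regexp.MustCompile("\x1b\\[[0-9;]*[mK]")

    // asString returns the original text, optionally with ANSI codes removed,
    // mirroring the shape of Item.AsString.
    func asString(orig string, stripAnsi bool) string {
        if stripAnsi {
            return ansi.ReplaceAllString(orig, "")
        }
        return orig
    }

    func main() {
        fmt.Println(asString("\x1b[34mfoo", true))  // foo
        fmt.Println(asString("\x1b[34mfoo", false)) // raw bytes preserved
    }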

View File

@@ -26,6 +26,7 @@ type Matcher struct {
eventBox *util.EventBox
reqBox *util.EventBox
partitions int
slab []*util.Slab
mergerCache map[string]*Merger
}
@@ -37,13 +38,15 @@ const (
// NewMatcher returns a new Matcher
func NewMatcher(patternBuilder func([]rune) *Pattern,
sort bool, tac bool, eventBox *util.EventBox) *Matcher {
partitions := util.Min(numPartitionsMultiplier*runtime.NumCPU(), maxPartitions)
return &Matcher{
patternBuilder: patternBuilder,
sort: sort,
tac: tac,
eventBox: eventBox,
reqBox: util.NewEventBox(),
partitions: runtime.NumCPU(),
partitions: partitions,
slab: make([]*util.Slab, partitions),
mergerCache: make(map[string]*Merger)}
}
@@ -106,18 +109,19 @@ func (m *Matcher) Loop() {
}
func (m *Matcher) sliceChunks(chunks []*Chunk) [][]*Chunk {
perSlice := len(chunks) / m.partitions
partitions := m.partitions
perSlice := len(chunks) / partitions
// No need to parallelize
if perSlice == 0 {
return [][]*Chunk{chunks}
partitions = len(chunks)
perSlice = 1
}
slices := make([][]*Chunk, m.partitions)
slices := make([][]*Chunk, partitions)
for i := 0; i < m.partitions; i++ {
for i := 0; i < partitions; i++ {
start := i * perSlice
end := start + perSlice
if i == m.partitions-1 {
if i == partitions-1 {
end = len(chunks)
}
slices[i] = chunks[start:end]
@@ -127,7 +131,7 @@ func (m *Matcher) sliceChunks(chunks []*Chunk) [][]*Chunk {
type partialResult struct {
index int
matches []*Item
matches []*Result
}
func (m *Matcher) scan(request MatchRequest) (*Merger, bool) {
@@ -152,17 +156,26 @@ func (m *Matcher) scan(request MatchRequest) (*Merger, bool) {
for idx, chunks := range slices {
waitGroup.Add(1)
go func(idx int, chunks []*Chunk) {
if m.slab[idx] == nil {
m.slab[idx] = util.MakeSlab(slab16Size, slab32Size)
}
go func(idx int, slab *util.Slab, chunks []*Chunk) {
defer func() { waitGroup.Done() }()
sliceMatches := []*Item{}
for _, chunk := range chunks {
matches := request.pattern.Match(chunk)
sliceMatches = append(sliceMatches, matches...)
count := 0
allMatches := make([][]*Result, len(chunks))
for idx, chunk := range chunks {
matches := request.pattern.Match(chunk, slab)
allMatches[idx] = matches
count += len(matches)
if cancelled.Get() {
return
}
countChan <- len(matches)
}
sliceMatches := make([]*Result, 0, count)
for _, matches := range allMatches {
sliceMatches = append(sliceMatches, matches...)
}
if m.sort {
if m.tac {
sort.Sort(ByRelevanceTac(sliceMatches))
@@ -171,7 +184,7 @@ func (m *Matcher) scan(request MatchRequest) (*Merger, bool) {
}
}
resultChan <- partialResult{idx, sliceMatches}
}(idx, chunks)
}(idx, m.slab[idx], chunks)
}
wait := func() bool {
@@ -199,12 +212,12 @@ func (m *Matcher) scan(request MatchRequest) (*Merger, bool) {
}
}
partialResults := make([][]*Item, numSlices)
partialResults := make([][]*Result, numSlices)
for _ = range slices {
partialResult := <-resultChan
partialResults[partialResult.index] = partialResult.matches
}
return NewMerger(partialResults, m.sort, m.tac), false
return NewMerger(pattern, partialResults, m.sort, m.tac), false
}
// Reset is called to interrupt/signal the ongoing search
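The matcher now derives its partition count from numPartitionsMultiplier*NumCPU capped at maxPartitions and keeps one slab per partition. A simplified sketch of slicing work across partitions; the real scan loop also streams counts and supports cancellation:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func minInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    func main() {
        chunks := make([]int, 100) // stand-ins for *Chunk values
        partitions := minInt(8*runtime.NumCPU(), 32)
        perSlice := len(chunks) / partitions
        if perSlice == 0 {
            partitions = len(chunks)
            perSlice = 1
        }

        var wg sync.WaitGroup
        for i := 0; i < partitions; i++ {
            start := i * perSlice
            end := start + perSlice
            if i == partitions-1 {
                end = len(chunks)
            }
            wg.Add(1)
            go func(part []int) {
                defer wg.Done()
                // each goroutine would match its own slice with its own slab
                _ = part
            }(chunks[start:end])
        }
        wg.Wait()
        fmt.Println("partitions:", partitions)
    }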

View File

@@ -3,13 +3,14 @@ package fzf
import "fmt"
// EmptyMerger is a Merger with no data
var EmptyMerger = NewMerger([][]*Item{}, false, false)
var EmptyMerger = NewMerger(nil, [][]*Result{}, false, false)
// Merger holds a set of locally sorted lists of items and provides the view of
// a single, globally-sorted list
type Merger struct {
lists [][]*Item
merged []*Item
pattern *Pattern
lists [][]*Result
merged []*Result
chunks *[]*Chunk
cursors []int
sorted bool
@@ -22,9 +23,10 @@ type Merger struct {
// original order
func PassMerger(chunks *[]*Chunk, tac bool) *Merger {
mg := Merger{
chunks: chunks,
tac: tac,
count: 0}
mg := Merger{
pattern: nil,
chunks: chunks,
tac: tac,
count: 0}
for _, chunk := range *mg.chunks {
mg.count += len(*chunk)
@@ -33,10 +35,11 @@ func PassMerger(chunks *[]*Chunk, tac bool) *Merger {
}
// NewMerger returns a new Merger
func NewMerger(lists [][]*Item, sorted bool, tac bool) *Merger {
func NewMerger(pattern *Pattern, lists [][]*Result, sorted bool, tac bool) *Merger {
mg := Merger{
pattern: pattern,
lists: lists,
merged: []*Item{},
merged: []*Result{},
chunks: nil,
cursors: make([]int, len(lists)),
sorted: sorted,
@@ -55,14 +58,14 @@ func (mg *Merger) Length() int {
return mg.count
}
// Get returns the pointer to the Item object indexed by the given integer
// Get returns the pointer to the Result object indexed by the given integer
func (mg *Merger) Get(idx int) *Item {
func (mg *Merger) Get(idx int) *Result {
if mg.chunks != nil {
if mg.tac {
idx = mg.count - idx - 1
}
chunk := (*mg.chunks)[idx/chunkSize]
return (*chunk)[idx%chunkSize]
return &Result{item: (*chunk)[idx%chunkSize]}
}
if mg.sorted {
@@ -86,9 +89,9 @@ func (mg *Merger) cacheable() bool {
return mg.count < mergerCacheMax
}
func (mg *Merger) mergedGet(idx int) *Item {
func (mg *Merger) mergedGet(idx int) *Result {
for i := len(mg.merged); i <= idx; i++ {
minRank := buildEmptyRank(0)
minRank := minRank()
minIdx := -1
for listIdx, list := range mg.lists {
cursor := mg.cursors[listIdx]
@@ -97,7 +100,7 @@ func (mg *Merger) mergedGet(idx int) *Item {
continue
}
if cursor >= 0 {
rank := list[cursor].Rank(false)
rank := list[cursor].rank
if minIdx < 0 || compareRanks(rank, minRank, mg.tac) {
minRank = rank
minIdx = listIdx
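mergedGet performs a lazy k-way merge over the locally sorted partition lists by repeatedly taking the list whose head has the smallest rank. A compact sketch of that idea over plain int slices:

    package main

    import "fmt"

    // mergeSorted lazily merges locally sorted lists the way Merger.mergedGet
    // does: scan the current head of every list and advance the smallest one.
    func mergeSorted(lists [][]int) []int {
        cursors := make([]int, len(lists))
        var merged []int
        for {
            minIdx, minVal := -1, 0
            for i, list := range lists {
                if cursors[i] >= len(list) {
                    continue
                }
                if v := list[cursors[i]]; minIdx < 0 || v < minVal {
                    minIdx, minVal = i, v
                }
            }
            if minIdx < 0 {
                return merged
            }
            merged = append(merged, minVal)
            cursors[minIdx]++
        }
    }

    func main() {
        fmt.Println(mergeSorted([][]int{{1, 4}, {2, 3}, {5}}))
    }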

View File

@@ -15,18 +15,11 @@ func assert(t *testing.T, cond bool, msg ...string) {
} }
} }
func randItem() *Item {
str := fmt.Sprintf("%d", rand.Uint32())
offsets := make([]Offset, rand.Int()%3)
for idx := range offsets {
sidx := int32(rand.Uint32() % 20)
eidx := sidx + int32(rand.Uint32()%20)
offsets[idx] = Offset{sidx, eidx}
}
return &Item{
text: util.RunesToChars([]rune(str)),
rank: buildEmptyRank(rand.Int31()),
offsets: offsets}
}
func randResult() *Result {
str := fmt.Sprintf("%d", rand.Uint32())
return &Result{
item: &Item{text: util.RunesToChars([]rune(str))},
rank: rank{index: rand.Int31()}}
}
func TestEmptyMerger(t *testing.T) { func TestEmptyMerger(t *testing.T) {
@@ -36,23 +29,23 @@ func TestEmptyMerger(t *testing.T) {
assert(t, len(EmptyMerger.merged) == 0, "Invalid merged list") assert(t, len(EmptyMerger.merged) == 0, "Invalid merged list")
} }
func buildLists(partiallySorted bool) ([][]*Item, []*Item) {
func buildLists(partiallySorted bool) ([][]*Result, []*Result) {
numLists := 4
lists := make([][]*Item, numLists)
lists := make([][]*Result, numLists)
cnt := 0
for i := 0; i < numLists; i++ {
numItems := rand.Int() % 20
cnt += numItems
lists[i] = make([]*Item, numItems)
for j := 0; j < numItems; j++ {
item := randItem()
numResults := rand.Int() % 20
cnt += numResults
lists[i] = make([]*Result, numResults)
for j := 0; j < numResults; j++ {
item := randResult()
lists[i][j] = item
}
if partiallySorted {
sort.Sort(ByRelevance(lists[i]))
}
}
items := []*Item{}
items := []*Result{}
for _, list := range lists {
items = append(items, list...)
}
@@ -64,7 +57,7 @@ func TestMergerUnsorted(t *testing.T) {
cnt := len(items)
// Not sorted: same order
mg := NewMerger(lists, false, false)
mg := NewMerger(nil, lists, false, false)
assert(t, cnt == mg.Length(), "Invalid Length")
for i := 0; i < cnt; i++ {
assert(t, items[i] == mg.Get(i), "Invalid Get")
@@ -76,7 +69,7 @@ func TestMergerSorted(t *testing.T) {
cnt := len(items)
// Sorted sorted order
mg := NewMerger(lists, true, false)
mg := NewMerger(nil, lists, true, false)
assert(t, cnt == mg.Length(), "Invalid Length")
sort.Sort(ByRelevance(items))
for i := 0; i < cnt; i++ {
@@ -86,7 +79,7 @@ func TestMergerSorted(t *testing.T) {
}
// Inverse order
mg2 := NewMerger(lists, true, false)
mg2 := NewMerger(nil, lists, true, false)
for i := cnt - 1; i >= 0; i-- {
if items[i] != mg2.Get(i) {
t.Error("Not sorted", items[i], mg2.Get(i))

View File

@@ -8,6 +8,7 @@ import (
"strings"
"unicode/utf8"
"github.com/junegunn/fzf/src/algo"
"github.com/junegunn/fzf/src/curses"
"github.com/junegunn/go-shellwords"
@@ -19,6 +20,7 @@ const usage = `usage: fzf [options]
-x, --extended Extended-search mode
(enabled by default; +x or --no-extended to disable)
-e, --exact Enable Exact-match
--algo=TYPE Fuzzy matching algorithm: [v1|v2] (default: v2)
-i Case-insensitive match (default: smart-case match)
+i Case-sensitive match
-n, --nth=N[,..] Comma-separated list of field index expressions
@@ -94,7 +96,7 @@ const (
type criterion int
const (
byMatchLen criterion = iota
byScore criterion = iota
byLength
byBegin
byEnd
@@ -128,6 +130,7 @@ type previewOpts struct {
// Options stores the values of command-line options
type Options struct {
Fuzzy bool
FuzzyAlgo algo.Algo
Extended bool
Case Case
Nth []Range
@@ -159,6 +162,7 @@ type Options struct {
Preview previewOpts
PrintQuery bool
ReadZero bool
Printer func(string)
Sync bool
History *History
Header []string
@@ -171,6 +175,7 @@ type Options struct {
func defaultOptions() *Options {
return &Options{
Fuzzy: true,
FuzzyAlgo: algo.FuzzyMatchV2,
Extended: true,
Case: CaseSmart,
Nth: make([]Range, 0),
@@ -178,7 +183,7 @@ func defaultOptions() *Options {
Delimiter: Delimiter{},
Sort: 1000,
Tac: false,
Criteria: []criterion{byMatchLen, byLength},
Criteria: []criterion{byScore, byLength},
Multi: false,
Ansi: false,
Mouse: true,
@@ -202,6 +207,7 @@ func defaultOptions() *Options {
Preview: previewOpts{"", posRight, sizeSpec{50, true}, false},
PrintQuery: false,
ReadZero: false,
Printer: func(str string) { fmt.Println(str) },
Sync: false,
History: nil,
Header: make([]string, 0),
@@ -321,6 +327,18 @@ func isAlphabet(char uint8) bool {
return char >= 'a' && char <= 'z'
}
func parseAlgo(str string) algo.Algo {
switch str {
case "v1":
return algo.FuzzyMatchV1
case "v2":
return algo.FuzzyMatchV2
default:
errorExit("invalid algorithm (expected: v1 or v2)")
}
return algo.FuzzyMatchV2
}
func parseKeyChords(str string, message string) map[int]string {
if len(str) == 0 {
errorExit(message)
@@ -406,7 +424,7 @@ func parseKeyChords(str string, message string) map[int]string {
}
func parseTiebreak(str string) []criterion {
criteria := []criterion{byMatchLen}
criteria := []criterion{byScore}
hasIndex := false
hasLength := false
hasBegin := false
@@ -833,6 +851,8 @@ func parseOptions(opts *Options, allArgs []string) {
case "-f", "--filter":
filter := nextString(allArgs, &i, "query string required")
opts.Filter = &filter
case "--algo":
opts.FuzzyAlgo = parseAlgo(nextString(allArgs, &i, "algorithm required (v1|v2)"))
case "--expect":
opts.Expect = parseKeyChords(nextString(allArgs, &i, "key names required"), "key names required")
case "--tiebreak":
@@ -917,6 +937,10 @@ func parseOptions(opts *Options, allArgs []string) {
opts.ReadZero = true
case "--no-read0":
opts.ReadZero = false
case "--print0":
opts.Printer = func(str string) { fmt.Print(str, "\x00") }
case "--no-print0":
opts.Printer = func(str string) { fmt.Println(str) }
case "--print-query":
opts.PrintQuery = true
case "--no-print-query":
@@ -961,7 +985,9 @@ func parseOptions(opts *Options, allArgs []string) {
case "--version":
opts.Version = true
default:
if match, value := optString(arg, "-q", "--query="); match {
if match, value := optString(arg, "--algo="); match {
opts.FuzzyAlgo = parseAlgo(value)
} else if match, value := optString(arg, "-q", "--query="); match {
opts.Query = value
} else if match, value := optString(arg, "-f", "--filter="); match {
opts.Filter = &value
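--algo accepts v1 or v2 and --print0 switches the output delimiter; both end up in Options. A hedged sketch of parsing just these two flags; fzf's real parser also accepts the separated --algo v2 form and many more options:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    type options struct {
        algo   string
        print0 bool
    }

    // parseArgs sketches how --algo=v1|v2 and --print0/--no-print0 could be handled.
    func parseArgs(args []string) (options, error) {
        opts := options{algo: "v2"}
        for _, arg := range args {
            switch {
            case strings.HasPrefix(arg, "--algo="):
                v := strings.TrimPrefix(arg, "--algo=")
                if v != "v1" && v != "v2" {
                    return opts, fmt.Errorf("invalid algorithm (expected: v1 or v2)")
                }
                opts.algo = v
            case arg == "--print0":
                opts.print0 = true
            case arg == "--no-print0":
                opts.print0 = false
            }
        }
        return opts, nil
    }

    func main() {
        opts, err := parseArgs(os.Args[1:])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        fmt.Printf("%+v\n", opts)
    }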

View File

@@ -342,7 +342,7 @@ func TestDefaultCtrlNP(t *testing.T) {
check([]string{"--bind=ctrl-n:accept"}, curses.CtrlN, actAccept)
check([]string{"--bind=ctrl-p:accept"}, curses.CtrlP, actAccept)
hist := "--history=/tmp/foo"
hist := "--history=/tmp/fzf-history"
check([]string{hist}, curses.CtrlN, actNextHistory)
check([]string{hist}, curses.CtrlP, actPreviousHistory)

View File

@@ -2,7 +2,6 @@ package fzf
import ( import (
"regexp" "regexp"
"sort"
"strings" "strings"
"github.com/junegunn/fzf/src/algo" "github.com/junegunn/fzf/src/algo"
@@ -41,6 +40,7 @@ type termSet []term
// Pattern represents search pattern // Pattern represents search pattern
type Pattern struct { type Pattern struct {
fuzzy bool fuzzy bool
fuzzyAlgo algo.Algo
extended bool extended bool
caseSensitive bool caseSensitive bool
forward bool forward bool
@@ -49,7 +49,7 @@ type Pattern struct {
cacheable bool cacheable bool
delimiter Delimiter delimiter Delimiter
nth []Range nth []Range
procFun map[termType]func(bool, bool, util.Chars, []rune) algo.Result procFun map[termType]algo.Algo
} }
var ( var (
@@ -75,8 +75,8 @@ func clearChunkCache() {
} }
// BuildPattern builds Pattern object from the given arguments // BuildPattern builds Pattern object from the given arguments
func BuildPattern(fuzzy bool, extended bool, caseMode Case, forward bool, func BuildPattern(fuzzy bool, fuzzyAlgo algo.Algo, extended bool, caseMode Case, forward bool,
nth []Range, delimiter Delimiter, runes []rune) *Pattern { cacheable bool, nth []Range, delimiter Delimiter, runes []rune) *Pattern {
var asString string var asString string
if extended { if extended {
@@ -90,7 +90,7 @@ func BuildPattern(fuzzy bool, extended bool, caseMode Case, forward bool,
return cached return cached
} }
caseSensitive, cacheable := true, true caseSensitive := true
termSets := []termSet{} termSets := []termSet{}
if extended { if extended {
@@ -100,7 +100,7 @@ func BuildPattern(fuzzy bool, extended bool, caseMode Case, forward bool,
for idx, term := range termSet { for idx, term := range termSet {
// If the query contains inverse search terms or OR operators, // If the query contains inverse search terms or OR operators,
// we cannot cache the search scope // we cannot cache the search scope
if idx > 0 || term.inv { if !cacheable || idx > 0 || term.inv {
cacheable = false cacheable = false
break Loop break Loop
} }
@@ -117,6 +117,7 @@ func BuildPattern(fuzzy bool, extended bool, caseMode Case, forward bool,
ptr := &Pattern{ ptr := &Pattern{
fuzzy: fuzzy, fuzzy: fuzzy,
fuzzyAlgo: fuzzyAlgo,
extended: extended, extended: extended,
caseSensitive: caseSensitive, caseSensitive: caseSensitive,
forward: forward, forward: forward,
@@ -125,9 +126,9 @@ func BuildPattern(fuzzy bool, extended bool, caseMode Case, forward bool,
cacheable: cacheable, cacheable: cacheable,
nth: nth, nth: nth,
delimiter: delimiter, delimiter: delimiter,
procFun: make(map[termType]func(bool, bool, util.Chars, []rune) algo.Result)} procFun: make(map[termType]algo.Algo)}
ptr.procFun[termFuzzy] = algo.FuzzyMatch ptr.procFun[termFuzzy] = fuzzyAlgo
ptr.procFun[termEqual] = algo.EqualMatch ptr.procFun[termEqual] = algo.EqualMatch
ptr.procFun[termExact] = algo.ExactMatchNaive ptr.procFun[termExact] = algo.ExactMatchNaive
ptr.procFun[termPrefix] = algo.PrefixMatch ptr.procFun[termPrefix] = algo.PrefixMatch
@@ -235,9 +236,7 @@ func (p *Pattern) CacheKey() string {
} }
// Match returns the list of matches Items in the given Chunk
func (p *Pattern) Match(chunk *Chunk) []*Item {
func (p *Pattern) Match(chunk *Chunk, slab *util.Slab) []*Result {
space := chunk
// ChunkCache: Exact match
cacheKey := p.CacheKey()
if p.cacheable {
@@ -246,7 +245,8 @@ func (p *Pattern) Match(chunk *Chunk) []*Item {
}
}
// ChunkCache: Prefix/suffix match
// Prefix/suffix cache
var space []*Result
Loop:
for idx := 1; idx < len(cacheKey); idx++ {
// [---------| ] | [ |---------]
@@ -256,14 +256,13 @@ Loop:
suffix := cacheKey[idx:]
for _, substr := range [2]*string{&prefix, &suffix} {
if cached, found := _cache.Find(chunk, *substr); found {
cachedChunk := Chunk(cached)
space = &cachedChunk
space = cached
break Loop
}
}
}
matches := p.matchChunk(space)
matches := p.matchChunk(chunk, space, slab)
if p.cacheable {
_cache.Add(chunk, cacheKey, matches)
@@ -271,20 +270,19 @@ Loop:
return matches
}
func (p *Pattern) matchChunk(chunk *Chunk) []*Item {
matches := []*Item{}
if !p.extended {
for _, item := range *chunk {
offset, bonus := p.basicMatch(item)
if sidx := offset[0]; sidx >= 0 {
matches = append(matches,
dupItem(item, []Offset{offset}, bonus))
}
}
} else {
for _, item := range *chunk {
if offsets, bonus := p.extendedMatch(item); len(offsets) == len(p.termSets) {
matches = append(matches, dupItem(item, offsets, bonus))
}
}
}
func (p *Pattern) matchChunk(chunk *Chunk, space []*Result, slab *util.Slab) []*Result {
matches := []*Result{}
if space == nil {
for _, item := range *chunk {
if match, _, _ := p.MatchItem(item, false, slab); match != nil {
matches = append(matches, match)
}
}
} else {
for _, result := range space {
if match, _, _ := p.MatchItem(result.item, false, slab); match != nil {
matches = append(matches, match)
}
}
}
@@ -292,63 +290,75 @@ func (p *Pattern) matchChunk(chunk *Chunk) []*Item {
}
// MatchItem returns true if the Item is a match
func (p *Pattern) MatchItem(item *Item) bool {
if !p.extended {
offset, _ := p.basicMatch(item)
sidx := offset[0]
return sidx >= 0
}
offsets, _ := p.extendedMatch(item)
return len(offsets) == len(p.termSets)
}
func (p *Pattern) MatchItem(item *Item, withPos bool, slab *util.Slab) (*Result, []Offset, *[]int) {
if p.extended {
if offsets, bonus, trimLen, pos := p.extendedMatch(item, withPos, slab); len(offsets) == len(p.termSets) {
return buildResult(item, offsets, bonus, trimLen), offsets, pos
}
return nil, nil, nil
}
offset, bonus, trimLen, pos := p.basicMatch(item, withPos, slab)
if sidx := offset[0]; sidx >= 0 {
offsets := []Offset{offset}
return buildResult(item, offsets, bonus, trimLen), offsets, pos
}
return nil, nil, nil
}
func dupItem(item *Item, offsets []Offset, bonus int32) *Item {
sort.Sort(ByOrder(offsets))
return &Item{
text: item.text,
origText: item.origText,
transformed: item.transformed,
offsets: offsets,
bonus: bonus,
colors: item.colors,
rank: buildEmptyRank(item.Index())}
}
func (p *Pattern) basicMatch(item *Item) (Offset, int32) {
func (p *Pattern) basicMatch(item *Item, withPos bool, slab *util.Slab) (Offset, int, int, *[]int) {
input := p.prepareInput(item)
if p.fuzzy {
return p.iter(algo.FuzzyMatch, input, p.caseSensitive, p.forward, p.text)
return p.iter(p.fuzzyAlgo, input, p.caseSensitive, p.forward, p.text, withPos, slab)
}
return p.iter(algo.ExactMatchNaive, input, p.caseSensitive, p.forward, p.text)
return p.iter(algo.ExactMatchNaive, input, p.caseSensitive, p.forward, p.text, withPos, slab)
}
func (p *Pattern) extendedMatch(item *Item) ([]Offset, int32) {
func (p *Pattern) extendedMatch(item *Item, withPos bool, slab *util.Slab) ([]Offset, int, int, *[]int) {
input := p.prepareInput(item)
offsets := []Offset{}
var totalBonus int32
var totalScore int
var totalTrimLen int
var allPos *[]int
if withPos {
allPos = &[]int{}
}
for _, termSet := range p.termSets {
var offset *Offset
var bonus int32
var offset Offset
var currentScore int
var trimLen int
matched := false
for _, term := range termSet {
pfun := p.procFun[term.typ]
off, pen := p.iter(pfun, input, term.caseSensitive, p.forward, term.text)
off, score, tLen, pos := p.iter(pfun, input, term.caseSensitive, p.forward, term.text, withPos, slab)
if sidx := off[0]; sidx >= 0 {
if term.inv {
continue
}
offset, bonus = &off, pen
offset, currentScore, trimLen = off, score, tLen
matched = true
if withPos {
if pos != nil {
*allPos = append(*allPos, *pos...)
} else {
for idx := off[0]; idx < off[1]; idx++ {
*allPos = append(*allPos, int(idx))
}
}
}
break
} else if term.inv {
offset, bonus = &Offset{0, 0, 0}, 0
offset, currentScore, trimLen = Offset{0, 0}, 0, 0
matched = true
continue
}
}
if offset != nil {
if matched {
offsets = append(offsets, *offset)
offsets = append(offsets, offset)
totalBonus += bonus
totalScore += currentScore
totalTrimLen += trimLen
}
}
return offsets, totalBonus
return offsets, totalScore, totalTrimLen, allPos
}
func (p *Pattern) prepareInput(item *Item) []Token { func (p *Pattern) prepareInput(item *Item) []Token {
@@ -357,26 +367,28 @@ func (p *Pattern) prepareInput(item *Item) []Token {
} }
var ret []Token var ret []Token
if len(p.nth) > 0 { if len(p.nth) == 0 {
ret = []Token{Token{text: &item.text, prefixLength: 0, trimLength: int32(item.text.TrimLength())}}
} else {
tokens := Tokenize(item.text, p.delimiter) tokens := Tokenize(item.text, p.delimiter)
ret = Transform(tokens, p.nth) ret = Transform(tokens, p.nth)
} else {
ret = []Token{Token{text: item.text, prefixLength: 0, trimLength: item.text.TrimLength()}}
} }
item.transformed = ret item.transformed = ret
return ret return ret
} }
func (p *Pattern) iter(pfun func(bool, bool, util.Chars, []rune) algo.Result,
tokens []Token, caseSensitive bool, forward bool, pattern []rune) (Offset, int32) {
for _, part := range tokens {
prefixLength := int32(part.prefixLength)
if res := pfun(caseSensitive, forward, part.text, pattern); res.Start >= 0 {
sidx := res.Start + prefixLength
eidx := res.End + prefixLength
return Offset{sidx, eidx, int32(part.trimLength)}, res.Bonus
}
}
// TODO: math.MaxUint16
return Offset{-1, -1, -1}, 0.0
}
func (p *Pattern) iter(pfun algo.Algo, tokens []Token, caseSensitive bool, forward bool, pattern []rune, withPos bool, slab *util.Slab) (Offset, int, int, *[]int) {
for _, part := range tokens {
if res, pos := pfun(caseSensitive, forward, *part.text, pattern, withPos, slab); res.Start >= 0 {
sidx := int32(res.Start) + part.prefixLength
eidx := int32(res.End) + part.prefixLength
if pos != nil {
for idx := range *pos {
(*pos)[idx] += int(part.prefixLength)
}
}
return Offset{sidx, eidx}, res.Score, int(part.trimLength), pos
}
}
return Offset{-1, -1}, 0, -1, nil
}
View File

@@ -8,6 +8,12 @@ import (
     "github.com/junegunn/fzf/src/util"
 )

+var slab *util.Slab
+
+func init() {
+    slab = util.MakeSlab(slab16Size, slab32Size)
+}
+
 func TestParseTermsExtended(t *testing.T) {
     terms := parseTerms(true, CaseSmart,
         "| aaa 'bbb ^ccc ddd$ !eee !'fff !^ggg !hhh$ | ^iii$ ^xxx | 'yyy | | zzz$ | !ZZZ |")
@@ -69,26 +75,32 @@ func TestParseTermsEmpty(t *testing.T) {
 func TestExact(t *testing.T) {
     defer clearPatternCache()
     clearPatternCache()
-    pattern := BuildPattern(true, true, CaseSmart, true,
+    pattern := BuildPattern(true, algo.FuzzyMatchV2, true, CaseSmart, true, true,
         []Range{}, Delimiter{}, []rune("'abc"))
-    res := algo.ExactMatchNaive(
-        pattern.caseSensitive, pattern.forward, util.RunesToChars([]rune("aabbcc abc")), pattern.termSets[0][0].text)
+    res, pos := algo.ExactMatchNaive(
+        pattern.caseSensitive, pattern.forward, util.RunesToChars([]rune("aabbcc abc")), pattern.termSets[0][0].text, true, nil)
     if res.Start != 7 || res.End != 10 {
         t.Errorf("%s / %d / %d", pattern.termSets, res.Start, res.End)
     }
+    if pos != nil {
+        t.Errorf("pos is expected to be nil")
+    }
 }

 func TestEqual(t *testing.T) {
     defer clearPatternCache()
     clearPatternCache()
-    pattern := BuildPattern(true, true, CaseSmart, true, []Range{}, Delimiter{}, []rune("^AbC$"))
+    pattern := BuildPattern(true, algo.FuzzyMatchV2, true, CaseSmart, true, true, []Range{}, Delimiter{}, []rune("^AbC$"))

-    match := func(str string, sidxExpected int32, eidxExpected int32) {
-        res := algo.EqualMatch(
-            pattern.caseSensitive, pattern.forward, util.RunesToChars([]rune(str)), pattern.termSets[0][0].text)
+    match := func(str string, sidxExpected int, eidxExpected int) {
+        res, pos := algo.EqualMatch(
+            pattern.caseSensitive, pattern.forward, util.RunesToChars([]rune(str)), pattern.termSets[0][0].text, true, nil)
         if res.Start != sidxExpected || res.End != eidxExpected {
             t.Errorf("%s / %d / %d", pattern.termSets, res.Start, res.End)
         }
+        if pos != nil {
+            t.Errorf("pos is expected to be nil")
+        }
     }
     match("ABC", -1, -1)
     match("AbC", 0, 3)
@@ -97,17 +109,17 @@ func TestEqual(t *testing.T) {
 func TestCaseSensitivity(t *testing.T) {
     defer clearPatternCache()
     clearPatternCache()
-    pat1 := BuildPattern(true, false, CaseSmart, true, []Range{}, Delimiter{}, []rune("abc"))
+    pat1 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseSmart, true, true, []Range{}, Delimiter{}, []rune("abc"))
     clearPatternCache()
-    pat2 := BuildPattern(true, false, CaseSmart, true, []Range{}, Delimiter{}, []rune("Abc"))
+    pat2 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseSmart, true, true, []Range{}, Delimiter{}, []rune("Abc"))
     clearPatternCache()
-    pat3 := BuildPattern(true, false, CaseIgnore, true, []Range{}, Delimiter{}, []rune("abc"))
+    pat3 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseIgnore, true, true, []Range{}, Delimiter{}, []rune("abc"))
     clearPatternCache()
-    pat4 := BuildPattern(true, false, CaseIgnore, true, []Range{}, Delimiter{}, []rune("Abc"))
+    pat4 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseIgnore, true, true, []Range{}, Delimiter{}, []rune("Abc"))
     clearPatternCache()
-    pat5 := BuildPattern(true, false, CaseRespect, true, []Range{}, Delimiter{}, []rune("abc"))
+    pat5 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseRespect, true, true, []Range{}, Delimiter{}, []rune("abc"))
     clearPatternCache()
-    pat6 := BuildPattern(true, false, CaseRespect, true, []Range{}, Delimiter{}, []rune("Abc"))
+    pat6 := BuildPattern(true, algo.FuzzyMatchV2, false, CaseRespect, true, true, []Range{}, Delimiter{}, []rune("Abc"))

     if string(pat1.text) != "abc" || pat1.caseSensitive != false ||
         string(pat2.text) != "Abc" || pat2.caseSensitive != true ||
@@ -120,7 +132,7 @@ func TestCaseSensitivity(t *testing.T) {
 }

 func TestOrigTextAndTransformed(t *testing.T) {
-    pattern := BuildPattern(true, true, CaseSmart, true, []Range{}, Delimiter{}, []rune("jg"))
+    pattern := BuildPattern(true, algo.FuzzyMatchV2, true, CaseSmart, true, true, []Range{}, Delimiter{}, []rune("jg"))
     tokens := Tokenize(util.RunesToChars([]rune("junegunn")), Delimiter{})
     trans := Transform(tokens, []Range{Range{1, 1}})
@@ -133,18 +145,29 @@ func TestOrigTextAndTransformed(t *testing.T) {
                 transformed: trans},
         }
         pattern.extended = extended
-        matches := pattern.matchChunk(&chunk)
-        if matches[0].text.ToString() != "junegunn" || string(*matches[0].origText) != "junegunn.choi" ||
-            matches[0].offsets[0][0] != 0 || matches[0].offsets[0][1] != 5 ||
-            !reflect.DeepEqual(matches[0].transformed, trans) {
+        matches := pattern.matchChunk(&chunk, nil, slab) // No cache
+        if !(matches[0].item.text.ToString() == "junegunn" &&
+            string(*matches[0].item.origText) == "junegunn.choi" &&
+            reflect.DeepEqual(matches[0].item.transformed, trans)) {
             t.Error("Invalid match result", matches)
         }
+
+        match, offsets, pos := pattern.MatchItem(chunk[0], true, slab)
+        if !(match.item.text.ToString() == "junegunn" &&
+            string(*match.item.origText) == "junegunn.choi" &&
+            offsets[0][0] == 0 && offsets[0][1] == 5 &&
+            reflect.DeepEqual(match.item.transformed, trans)) {
+            t.Error("Invalid match result", match, offsets, extended)
+        }
+        if !((*pos)[0] == 4 && (*pos)[1] == 0) {
+            t.Error("Invalid pos array", *pos)
+        }
     }
 }

 func TestCacheKey(t *testing.T) {
     test := func(extended bool, patStr string, expected string, cacheable bool) {
-        pat := BuildPattern(true, extended, CaseSmart, true, []Range{}, Delimiter{}, []rune(patStr))
+        pat := BuildPattern(true, algo.FuzzyMatchV2, extended, CaseSmart, true, true, []Range{}, Delimiter{}, []rune(patStr))
         if pat.CacheKey() != expected {
             t.Errorf("Expected: %s, actual: %s", expected, pat.CacheKey())
         }


@@ -34,7 +34,7 @@ func (r *Reader) feed(src io.Reader) {
     if r.delimNil {
         delim = '\000'
     }
-    reader := bufio.NewReader(src)
+    reader := bufio.NewReaderSize(src, readerBufferSize)
     for {
         // ReadBytes returns err != nil if and only if the returned data does not
         // end in delim.

src/result.go (new file, 240 lines)

@@ -0,0 +1,240 @@
package fzf

import (
    "math"
    "sort"

    "github.com/junegunn/fzf/src/curses"
    "github.com/junegunn/fzf/src/util"
)

// Offset holds two 32-bit integers denoting the offsets of a matched substring
type Offset [2]int32

type colorOffset struct {
    offset [2]int32
    color  int
    bold   bool
    index  int32
}

type rank struct {
    points [4]uint16
    index  int32
}

type Result struct {
    item *Item
    rank rank
}

func buildResult(item *Item, offsets []Offset, score int, trimLen int) *Result {
    if len(offsets) > 1 {
        sort.Sort(ByOrder(offsets))
    }

    result := Result{item: item, rank: rank{index: item.index}}
    numChars := item.text.Length()
    minBegin := math.MaxUint16
    maxEnd := 0
    validOffsetFound := false
    for _, offset := range offsets {
        b, e := int(offset[0]), int(offset[1])
        if b < e {
            minBegin = util.Min(b, minBegin)
            maxEnd = util.Max(e, maxEnd)
            validOffsetFound = true
        }
    }

    for idx, criterion := range sortCriteria {
        val := uint16(math.MaxUint16)
        switch criterion {
        case byScore:
            // Higher is better
            val = math.MaxUint16 - util.AsUint16(score)
        case byLength:
            // If offsets is empty, trimLen will be 0, but we don't care
            val = util.AsUint16(trimLen)
        case byBegin:
            if validOffsetFound {
                whitePrefixLen := 0
                for idx := 0; idx < numChars; idx++ {
                    r := item.text.Get(idx)
                    whitePrefixLen = idx
                    if idx == minBegin || r != ' ' && r != '\t' {
                        break
                    }
                }
                val = util.AsUint16(minBegin - whitePrefixLen)
            }
        case byEnd:
            if validOffsetFound {
                val = util.AsUint16(1 + numChars - maxEnd)
            }
        }
        result.rank.points[idx] = val
    }

    return &result
}

// Sort criteria to use. Never changes once fzf is started.
var sortCriteria []criterion

// Index returns ordinal index of the Item
func (result *Result) Index() int32 {
    return result.item.index
}

func minRank() rank {
    return rank{index: 0, points: [4]uint16{math.MaxUint16, 0, 0, 0}}
}

func (result *Result) colorOffsets(matchOffsets []Offset, color int, bold bool, current bool) []colorOffset {
    itemColors := result.item.Colors()

    if len(itemColors) == 0 {
        var offsets []colorOffset
        for _, off := range matchOffsets {
            offsets = append(offsets, colorOffset{offset: [2]int32{off[0], off[1]}, color: color, bold: bold})
        }
        return offsets
    }

    // Find max column
    var maxCol int32
    for _, off := range matchOffsets {
        if off[1] > maxCol {
            maxCol = off[1]
        }
    }
    for _, ansi := range itemColors {
        if ansi.offset[1] > maxCol {
            maxCol = ansi.offset[1]
        }
    }

    cols := make([]int, maxCol)
    for colorIndex, ansi := range itemColors {
        for i := ansi.offset[0]; i < ansi.offset[1]; i++ {
            cols[i] = colorIndex + 1 // XXX
        }
    }

    for _, off := range matchOffsets {
        for i := off[0]; i < off[1]; i++ {
            cols[i] = -1
        }
    }

    // sort.Sort(ByOrder(offsets))

    // Merge offsets
    // ------------  ----  --  ----
    //   ++++++++        ++++++++++
    // --++++++++--    --++++++++++---
    curr := 0
    start := 0
    var colors []colorOffset
    add := func(idx int) {
        if curr != 0 && idx > start {
            if curr == -1 {
                colors = append(colors, colorOffset{
                    offset: [2]int32{int32(start), int32(idx)}, color: color, bold: bold})
            } else {
                ansi := itemColors[curr-1]
                fg := ansi.color.fg
                if fg == -1 {
                    if current {
                        fg = curses.CurrentFG
                    } else {
                        fg = curses.FG
                    }
                }
                bg := ansi.color.bg
                if bg == -1 {
                    if current {
                        bg = curses.DarkBG
                    } else {
                        bg = curses.BG
                    }
                }
                colors = append(colors, colorOffset{
                    offset: [2]int32{int32(start), int32(idx)},
                    color:  curses.PairFor(fg, bg),
                    bold:   ansi.color.bold || bold})
            }
        }
    }
    for idx, col := range cols {
        if col != curr {
            add(idx)
            start = idx
            curr = col
        }
    }
    add(int(maxCol))
    return colors
}

// ByOrder is for sorting substring offsets
type ByOrder []Offset

func (a ByOrder) Len() int {
    return len(a)
}

func (a ByOrder) Swap(i, j int) {
    a[i], a[j] = a[j], a[i]
}

func (a ByOrder) Less(i, j int) bool {
    ioff := a[i]
    joff := a[j]
    return (ioff[0] < joff[0]) || (ioff[0] == joff[0]) && (ioff[1] <= joff[1])
}

// ByRelevance is for sorting Items
type ByRelevance []*Result

func (a ByRelevance) Len() int {
    return len(a)
}

func (a ByRelevance) Swap(i, j int) {
    a[i], a[j] = a[j], a[i]
}

func (a ByRelevance) Less(i, j int) bool {
    return compareRanks((*a[i]).rank, (*a[j]).rank, false)
}

// ByRelevanceTac is for sorting Items
type ByRelevanceTac []*Result

func (a ByRelevanceTac) Len() int {
    return len(a)
}

func (a ByRelevanceTac) Swap(i, j int) {
    a[i], a[j] = a[j], a[i]
}

func (a ByRelevanceTac) Less(i, j int) bool {
    return compareRanks((*a[i]).rank, (*a[j]).rank, true)
}

func compareRanks(irank rank, jrank rank, tac bool) bool {
    for idx := 0; idx < 4; idx++ {
        left := irank.points[idx]
        right := jrank.points[idx]
        if left < right {
            return true
        } else if left > right {
            return false
        }
    }
    return (irank.index <= jrank.index) != tac
}
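For context on the rank encoding above: comparison is lexicographic over the four uint16 slots, and byScore is stored as math.MaxUint16 minus the score, so a higher score becomes a smaller first component and sorts first. Below is a standalone sketch with made-up values (its rank, less and pack are local stand-ins, not the Result/Item machinery above).

package main

import (
    "fmt"
    "math"
    "sort"
)

type rank struct {
    points [4]uint16
    index  int32
}

// less compares criteria in order; earlier slots dominate later ones.
func less(a, b rank) bool {
    for i := 0; i < 4; i++ {
        if a.points[i] != b.points[i] {
            return a.points[i] < b.points[i]
        }
    }
    return a.index < b.index
}

// pack stores an inverted score in slot 0 and the length in slot 1.
func pack(score, length int) rank {
    return rank{points: [4]uint16{uint16(math.MaxUint16 - score), uint16(length)}}
}

func main() {
    ranks := []rank{pack(10, 80), pack(50, 120), pack(50, 7)}
    sort.Slice(ranks, func(i, j int) bool { return less(ranks[i], ranks[j]) })
    // Resulting order: score 50/len 7, score 50/len 120, score 10/len 80
    fmt.Println(ranks)
}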

src/result_test.go (new file, 119 lines)

@@ -0,0 +1,119 @@
package fzf

import (
    "math"
    "sort"
    "testing"

    "github.com/junegunn/fzf/src/curses"
    "github.com/junegunn/fzf/src/util"
)

func TestOffsetSort(t *testing.T) {
    offsets := []Offset{
        Offset{3, 5}, Offset{2, 7},
        Offset{1, 3}, Offset{2, 9}}
    sort.Sort(ByOrder(offsets))

    if offsets[0][0] != 1 || offsets[0][1] != 3 ||
        offsets[1][0] != 2 || offsets[1][1] != 7 ||
        offsets[2][0] != 2 || offsets[2][1] != 9 ||
        offsets[3][0] != 3 || offsets[3][1] != 5 {
        t.Error("Invalid order:", offsets)
    }
}

func TestRankComparison(t *testing.T) {
    rank := func(vals ...uint16) rank {
        return rank{
            points: [4]uint16{vals[0], vals[1], vals[2], vals[3]},
            index:  int32(vals[4])}
    }
    if compareRanks(rank(3, 0, 0, 0, 5), rank(2, 0, 0, 0, 7), false) ||
        !compareRanks(rank(3, 0, 0, 0, 5), rank(3, 0, 0, 0, 6), false) ||
        !compareRanks(rank(1, 2, 0, 0, 3), rank(1, 3, 0, 0, 2), false) ||
        !compareRanks(rank(0, 0, 0, 0, 0), rank(0, 0, 0, 0, 0), false) {
        t.Error("Invalid order")
    }

    if compareRanks(rank(3, 0, 0, 0, 5), rank(2, 0, 0, 0, 7), true) ||
        !compareRanks(rank(3, 0, 0, 0, 5), rank(3, 0, 0, 0, 6), false) ||
        !compareRanks(rank(1, 2, 0, 0, 3), rank(1, 3, 0, 0, 2), true) ||
        !compareRanks(rank(0, 0, 0, 0, 0), rank(0, 0, 0, 0, 0), false) {
        t.Error("Invalid order (tac)")
    }
}

// Match length, string length, index
func TestResultRank(t *testing.T) {
    // FIXME global
    sortCriteria = []criterion{byScore, byLength}

    strs := [][]rune{[]rune("foo"), []rune("foobar"), []rune("bar"), []rune("baz")}
    item1 := buildResult(&Item{text: util.RunesToChars(strs[0]), index: 1}, []Offset{}, 2, 3)
    if item1.rank.points[0] != math.MaxUint16-2 || // Bonus
        item1.rank.points[1] != 3 || // Length
        item1.rank.points[2] != 0 || // Unused
        item1.rank.points[3] != 0 || // Unused
        item1.item.index != 1 {
        t.Error(item1.rank)
    }
    // Only differ in index
    item2 := buildResult(&Item{text: util.RunesToChars(strs[0])}, []Offset{}, 2, 3)

    items := []*Result{item1, item2}
    sort.Sort(ByRelevance(items))
    if items[0] != item2 || items[1] != item1 {
        t.Error(items)
    }

    items = []*Result{item2, item1, item1, item2}
    sort.Sort(ByRelevance(items))
    if items[0] != item2 || items[1] != item2 ||
        items[2] != item1 || items[3] != item1 {
        t.Error(items, item1, item1.item.index, item2, item2.item.index)
    }

    // Sort by relevance
    item3 := buildResult(&Item{index: 2}, []Offset{Offset{1, 3}, Offset{5, 7}}, 3, 0)
    item4 := buildResult(&Item{index: 2}, []Offset{Offset{1, 2}, Offset{6, 7}}, 4, 0)
    item5 := buildResult(&Item{index: 2}, []Offset{Offset{1, 3}, Offset{5, 7}}, 5, 0)
    item6 := buildResult(&Item{index: 2}, []Offset{Offset{1, 2}, Offset{6, 7}}, 6, 0)
    items = []*Result{item1, item2, item3, item4, item5, item6}
    sort.Sort(ByRelevance(items))
    if !(items[0] == item6 && items[1] == item5 &&
        items[2] == item4 && items[3] == item3 &&
        items[4] == item2 && items[5] == item1) {
        t.Error(items, item1, item2, item3, item4, item5, item6)
    }
}

func TestColorOffset(t *testing.T) {
    // ------------ 20 ----  --  ----
    //   ++++++++        ++++++++++
    // --++++++++--    --++++++++++---
    offsets := []Offset{Offset{5, 15}, Offset{25, 35}}
    item := Result{
        item: &Item{
            colors: &[]ansiOffset{
                ansiOffset{[2]int32{0, 20}, ansiState{1, 5, false}},
                ansiOffset{[2]int32{22, 27}, ansiState{2, 6, true}},
                ansiOffset{[2]int32{30, 32}, ansiState{3, 7, false}},
                ansiOffset{[2]int32{33, 40}, ansiState{4, 8, true}}}}}
    // [{[0 5] 9 false} {[5 15] 99 false} {[15 20] 9 false} {[22 25] 10 true} {[25 35] 99 false} {[35 40] 11 true}]
    colors := item.colorOffsets(offsets, 99, false, true)
    assert := func(idx int, b int32, e int32, c int, bold bool) {
        o := colors[idx]
        if o.offset[0] != b || o.offset[1] != e || o.color != c || o.bold != bold {
            t.Error(o)
        }
    }
    assert(0, 0, 5, curses.ColUser, false)
    assert(1, 5, 15, 99, false)
    assert(2, 15, 20, curses.ColUser, false)
    assert(3, 22, 25, curses.ColUser+1, true)
    assert(4, 25, 35, 99, false)
    assert(5, 35, 40, curses.ColUser+2, true)
}
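The "Merge offsets" step exercised by TestColorOffset can be illustrated with a simplified standalone sketch. The span type and merge function below are hypothetical, not the colorOffsets implementation above; the idea is the same: paint ANSI color ranges into a per-column array, overwrite the matched columns with -1, then emit maximal runs of equal values.

package main

import "fmt"

type span struct {
    begin, end int // half-open [begin, end)
    kind       int // -1 = match, >0 = ANSI color index, 0 = plain
}

func merge(width int, ansi, match []span) []span {
    cols := make([]int, width)
    for _, a := range ansi {
        for i := a.begin; i < a.end; i++ {
            cols[i] = a.kind
        }
    }
    for _, m := range match {
        for i := m.begin; i < m.end; i++ {
            cols[i] = -1 // match wins over ANSI colors
        }
    }
    var out []span
    start, curr := 0, cols[0]
    for i := 1; i <= width; i++ {
        if i == width || cols[i] != curr {
            if curr != 0 {
                out = append(out, span{start, i, curr})
            }
            start = i
            if i < width {
                curr = cols[i]
            }
        }
    }
    return out
}

func main() {
    ansi := []span{{0, 12, 1}}   // ------------
    match := []span{{2, 10, -1}} //   ++++++++
    // Merged: --++++++++--
    fmt.Println(merge(15, ansi, match)) // [{0 2 1} {2 10 -1} {10 12 1}]
}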


@@ -18,6 +18,8 @@ import (
     "github.com/junegunn/go-runewidth"
 )

+// import "github.com/pkg/profile"
+
 type jumpMode int

 const (
@@ -61,6 +63,7 @@ type Terminal struct {
     reading    bool
     jumping    jumpMode
     jumpLabels string
+    printer    func(string)
     merger     *Merger
     selected   map[int32]selectedItem
     reqBox     *util.EventBox
@@ -73,11 +76,12 @@ type Terminal struct {
     initFunc   func()
     suppress   bool
     startChan  chan bool
+    slab       *util.Slab
 }

 type selectedItem struct {
     at   time.Time
-    text *string
+    text string
 }

 type byTimeOrder []selectedItem
@@ -266,6 +270,7 @@ func NewTerminal(opts *Options, eventBox *util.EventBox) *Terminal {
         reading:    true,
         jumping:    jumpDisabled,
         jumpLabels: opts.JumpLabels,
+        printer:    opts.Printer,
         merger:     EmptyMerger,
         selected:   make(map[int32]selectedItem),
         reqBox:     util.NewEventBox(),
@@ -276,6 +281,7 @@ func NewTerminal(opts *Options, eventBox *util.EventBox) *Terminal {
         eventBox:   eventBox,
         mutex:      sync.Mutex{},
         suppress:   true,
+        slab:       util.MakeSlab(slab16Size, slab32Size),
         startChan:  make(chan bool, 1),
         initFunc: func() {
             C.Init(opts.Theme, opts.Black, opts.Mouse)
@@ -343,21 +349,21 @@ func (t *Terminal) UpdateList(merger *Merger) {
 func (t *Terminal) output() bool {
     if t.printQuery {
-        fmt.Println(string(t.input))
+        t.printer(string(t.input))
     }
     if len(t.expect) > 0 {
-        fmt.Println(t.pressed)
+        t.printer(t.pressed)
     }
     found := len(t.selected) > 0
     if !found {
         cnt := t.merger.Length()
         if cnt > 0 && cnt > t.cy {
-            fmt.Println(t.current())
+            t.printer(t.current())
             found = true
         }
     } else {
         for _, sel := range t.sortSelected() {
-            fmt.Println(*sel.text)
+            t.printer(sel.text)
         }
     }
     return found
@@ -395,6 +401,8 @@ func displayWidth(runes []rune) int {
 const (
     minWidth  = 16
     minHeight = 4
+
+    maxDisplayWidthCalc = 1024
 )

 func calculateSize(base int, size sizeSpec, margin int, minSize int) int {
@@ -565,11 +573,10 @@ func (t *Terminal) printHeader() {
         state = newState
         item := &Item{
             text:   util.RunesToChars([]rune(trimmed)),
-            colors: colors,
-            rank:   buildEmptyRank(0)}
+            colors: colors}

         t.move(line, 2, true)
-        t.printHighlighted(item, false, C.ColHeader, 0, false)
+        t.printHighlighted(&Result{item: item}, false, C.ColHeader, 0, false)
     }
 }
@@ -590,7 +597,8 @@ func (t *Terminal) printList() {
     }
 }

-func (t *Terminal) printItem(item *Item, i int, current bool) {
+func (t *Terminal) printItem(result *Result, i int, current bool) {
+    item := result.item
     _, selected := t.selected[item.Index()]
     label := " "
     if t.jumping != jumpDisabled {
@@ -609,14 +617,14 @@ func (t *Terminal) printItem(item *Item, i int, current bool) {
         } else {
             t.window.CPrint(C.ColCurrent, true, " ")
         }
-        t.printHighlighted(item, true, C.ColCurrent, C.ColCurrentMatch, true)
+        t.printHighlighted(result, true, C.ColCurrent, C.ColCurrentMatch, true)
     } else {
         if selected {
             t.window.CPrint(C.ColSelected, true, ">")
         } else {
             t.window.Print(" ")
         }
-        t.printHighlighted(item, false, 0, C.ColMatch, false)
+        t.printHighlighted(result, false, 0, C.ColMatch, false)
     }
 }
@@ -645,6 +653,11 @@ func displayWidthWithLimit(runes []rune, prefixWidth int, limit int) int {
 }

 func trimLeft(runes []rune, width int) ([]rune, int32) {
+    if len(runes) > maxDisplayWidthCalc && len(runes) > width {
+        trimmed := len(runes) - width
+        return runes[trimmed:], int32(trimmed)
+    }
+
     currentWidth := displayWidth(runes)
     var trimmed int32
@@ -667,16 +680,32 @@ func overflow(runes []rune, max int) bool {
     return false
 }

-func (t *Terminal) printHighlighted(item *Item, bold bool, col1 int, col2 int, current bool) {
-    var maxe int
-    for _, offset := range item.offsets {
-        maxe = util.Max(maxe, int(offset[1]))
-    }
+func (t *Terminal) printHighlighted(result *Result, bold bool, col1 int, col2 int, current bool) {
+    item := result.item

     // Overflow
     text := make([]rune, item.text.Length())
     copy(text, item.text.ToRunes())
-    offsets := item.colorOffsets(col2, bold, current)
+    matchOffsets := []Offset{}
+    var pos *[]int
+    if t.merger.pattern != nil {
+        _, matchOffsets, pos = t.merger.pattern.MatchItem(item, true, t.slab)
+    }
+    charOffsets := matchOffsets
+    if pos != nil {
+        charOffsets = make([]Offset, len(*pos))
+        for idx, p := range *pos {
+            offset := Offset{int32(p), int32(p + 1)}
+            charOffsets[idx] = offset
+        }
+        sort.Sort(ByOrder(charOffsets))
+    }
+    var maxe int
+    for _, offset := range charOffsets {
+        maxe = util.Max(maxe, int(offset[1]))
+    }
+    offsets := result.colorOffsets(charOffsets, col2, bold, current)
     maxWidth := t.window.Width - 3
     maxe = util.Constrain(maxe+util.Min(maxWidth/2-2, t.hscrollOff), 0, len(text))
     if overflow(text, maxWidth) {
@@ -866,11 +895,12 @@ func (t *Terminal) isPreviewEnabled() bool {
 }

 func (t *Terminal) current() string {
-    return t.merger.Get(t.cy).AsString(t.ansi)
+    return t.merger.Get(t.cy).item.AsString(t.ansi)
 }

 // Loop is called to start Terminal I/O
 func (t *Terminal) Loop() {
+    // prof := profile.Start(profile.ProfilePath("/tmp/"))
     <-t.startChan
     { // Late initialization
         intChan := make(chan os.Signal, 1)
@@ -948,6 +978,7 @@ func (t *Terminal) Loop() {
         if code <= exitNoMatch && t.history != nil {
             t.history.append(string(t.input))
         }
+        // prof.Stop()
         os.Exit(code)
     }
@@ -1006,7 +1037,7 @@ func (t *Terminal) Loop() {
                 t.printPreview()
             case reqPrintQuery:
                 C.Close()
-                fmt.Println(string(t.input))
+                t.printer(string(t.input))
                 exit(exitOk)
             case reqQuit:
                 C.Close()
@@ -1037,13 +1068,13 @@ func (t *Terminal) Loop() {
         }
         selectItem := func(item *Item) bool {
             if _, found := t.selected[item.Index()]; !found {
-                t.selected[item.Index()] = selectedItem{time.Now(), item.StringPtr(t.ansi)}
+                t.selected[item.Index()] = selectedItem{time.Now(), item.AsString(t.ansi)}
                 return true
             }
             return false
         }
         toggleY := func(y int) {
-            item := t.merger.Get(y)
+            item := t.merger.Get(y).item
             if !selectItem(item) {
                 delete(t.selected, item.Index())
             }
@@ -1057,8 +1088,9 @@ func (t *Terminal) Loop() {
             for key, ret := range t.expect {
                 if keyMatch(key, event) {
                     t.pressed = ret
-                    req(reqClose)
-                    break
+                    t.reqBox.Set(reqClose, nil)
+                    t.mutex.Unlock()
+                    return
                 }
             }
@@ -1068,14 +1100,14 @@ func (t *Terminal) Loop() {
         case actIgnore:
         case actExecute:
             if t.cy >= 0 && t.cy < t.merger.Length() {
-                item := t.merger.Get(t.cy)
+                item := t.merger.Get(t.cy).item
                 t.executeCommand(t.execmap[mapkey], quoteEntry(item.AsString(t.ansi)))
             }
         case actExecuteMulti:
             if len(t.selected) > 0 {
                 sels := make([]string, len(t.selected))
                 for i, sel := range t.sortSelected() {
-                    sels[i] = quoteEntry(*sel.text)
+                    sels[i] = quoteEntry(sel.text)
                 }
                 t.executeCommand(t.execmap[mapkey], strings.Join(sels, " "))
             } else {
@@ -1137,7 +1169,7 @@ func (t *Terminal) Loop() {
         case actSelectAll:
             if t.multi {
                 for i := 0; i < t.merger.Length(); i++ {
-                    item := t.merger.Get(i)
+                    item := t.merger.Get(i).item
                     selectItem(item)
                 }
                 req(reqList, reqInfo)
@@ -1315,6 +1347,11 @@ func (t *Terminal) Loop() {
             if !doAction(action, mapkey) {
                 continue
             }
+            // Truncate the query if it's too long
+            if len(t.input) > maxPatternLength {
+                t.input = t.input[:maxPatternLength]
+                t.cx = util.Constrain(t.cx, 0, maxPatternLength)
+            }
             changed = string(previousInput) != string(t.input)
         } else {
             if mapkey == C.Rune {
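The trimLeft fast path added above trades exact width calculation for speed on extremely long lines: when a line is longer than maxDisplayWidthCalc, only the last width runes are kept and the rest is counted as trimmed. A standalone sketch of that idea follows; the constant value comes from the diff, while the function signature and the stubbed slow path are simplifications.

package main

import "fmt"

const maxDisplayWidthCalc = 1024

func trimLeft(runes []rune, width int) ([]rune, int) {
    // Fast path: for very long lines, assume one column per rune and keep
    // only the tail that could possibly fit on screen.
    if len(runes) > maxDisplayWidthCalc && len(runes) > width {
        trimmed := len(runes) - width
        return runes[trimmed:], trimmed
    }
    // The real slow path would measure display width rune by rune here.
    return runes, 0
}

func main() {
    long := make([]rune, 100000)
    for i := range long {
        long[i] = 'x'
    }
    visible, trimmed := trimLeft(long, 80)
    fmt.Println(len(visible), trimmed) // 80 99920
}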


@@ -18,9 +18,9 @@ type Range struct {
 // Token contains the tokenized part of the strings and its prefix length
 type Token struct {
-    text         util.Chars
-    prefixLength int
-    trimLength   int
+    text         *util.Chars
+    prefixLength int32
+    trimLength   int32
 }

 // Delimiter for tokenizing the input
@@ -80,9 +80,8 @@ func withPrefixLengths(tokens []util.Chars, begin int) []Token {
     prefixLength := begin
     for idx, token := range tokens {
-        // Need to define a new local variable instead of the reused token to take
-        // the pointer to it
-        ret[idx] = Token{token, prefixLength, token.TrimLength()}
+        // NOTE: &tokens[idx] instead of &tokens
+        ret[idx] = Token{&tokens[idx], int32(prefixLength), int32(token.TrimLength())}
         prefixLength += token.Length()
     }
     return ret
@@ -173,25 +172,18 @@ func joinTokens(tokens []Token) []rune {
     return ret
 }

-func joinTokensAsRunes(tokens []Token) []rune {
-    ret := []rune{}
-    for _, token := range tokens {
-        ret = append(ret, token.text.ToRunes()...)
-    }
-    return ret
-}
-
 // Transform is used to transform the input when --with-nth option is given
 func Transform(tokens []Token, withNth []Range) []Token {
     transTokens := make([]Token, len(withNth))
     numTokens := len(tokens)
     for idx, r := range withNth {
-        parts := []util.Chars{}
+        parts := []*util.Chars{}
         minIdx := 0
         if r.begin == r.end {
             idx := r.begin
             if idx == rangeEllipsis {
-                parts = append(parts, util.RunesToChars(joinTokensAsRunes(tokens)))
+                chars := util.RunesToChars(joinTokens(tokens))
+                parts = append(parts, &chars)
             } else {
                 if idx < 0 {
                     idx += numTokens + 1
@@ -235,7 +227,7 @@ func Transform(tokens []Token, withNth []Range) []Token {
         case 0:
             merged = util.RunesToChars([]rune{})
         case 1:
-            merged = parts[0]
+            merged = *parts[0]
         default:
             runes := []rune{}
             for _, part := range parts {
@@ -244,13 +236,13 @@ func Transform(tokens []Token, withNth []Range) []Token {
             merged = util.RunesToChars(runes)
         }

-        var prefixLength int
+        var prefixLength int32
         if minIdx < numTokens {
             prefixLength = tokens[minIdx].prefixLength
         } else {
             prefixLength = 0
         }
-        transTokens[idx] = Token{merged, prefixLength, merged.TrimLength()}
+        transTokens[idx] = Token{&merged, prefixLength, int32(merged.TrimLength())}
     }
     return transTokens
 }
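Conceptually, Transform implements the --with-nth style field selection while remembering where the first selected field began, so match offsets can later be mapped back to the original line. A rough, self-contained sketch of that behavior follows; the fieldRange type and withNth helper are hypothetical, not the Tokenize/Transform API above.

package main

import (
    "fmt"
    "strings"
)

type fieldRange struct{ begin, end int } // 1-based, inclusive

// withNth joins the selected fields and returns the column at which the
// first selected field started in the original line (the prefix length).
func withNth(line string, ranges []fieldRange) (string, int) {
    fields := strings.Fields(line)
    // Record where each field starts in the original line.
    starts := make([]int, len(fields))
    pos := 0
    for i, f := range fields {
        starts[i] = strings.Index(line[pos:], f) + pos
        pos = starts[i] + len(f)
    }
    selected := []string{}
    prefix := -1
    for _, r := range ranges {
        for i := r.begin; i <= r.end && i <= len(fields); i++ {
            if prefix < 0 {
                prefix = starts[i-1]
            }
            selected = append(selected, fields[i-1])
        }
    }
    return strings.Join(selected, " "), prefix
}

func main() {
    text, prefix := withNth("user 1234 /home/user/src", []fieldRange{{2, 3}})
    fmt.Printf("%q starts at column %d\n", text, prefix) // "1234 /home/user/src" starts at column 5
}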


@@ -2,6 +2,7 @@
 # http://www.rubydoc.info/github/rest-client/rest-client/RestClient
 require 'rest_client'
+require 'json'

 if ARGV.length < 3
   puts "usage: #$0 <token> <version> <files...>"

src/util/slab.go (new file, 12 lines)

@@ -0,0 +1,12 @@
package util

type Slab struct {
    I16 []int16
    I32 []int32
}

func MakeSlab(size16 int, size32 int) *Slab {
    return &Slab{
        I16: make([]int16, size16),
        I32: make([]int32, size32)}
}
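A hypothetical usage sketch of the Slab: it is allocated once (per Terminal, as the diff above shows) and handed to every match call so scoring scratch space is reused instead of reallocated per item. The sizes and the scoreRow helper below are made up; fzf's own slab16Size/slab32Size constants and match functions are not reproduced here.

package main

import "fmt"

type Slab struct {
    I16 []int16
    I32 []int32
}

func MakeSlab(size16 int, size32 int) *Slab {
    return &Slab{I16: make([]int16, size16), I32: make([]int32, size32)}
}

// scoreRow borrows a row of scratch space from the slab when it is large
// enough, and falls back to a fresh allocation otherwise.
func scoreRow(slab *Slab, n int) []int16 {
    if slab != nil && n <= cap(slab.I16) {
        return slab.I16[:n]
    }
    return make([]int16, n)
}

func main() {
    slab := MakeSlab(100*1024, 2048)
    for _, item := range []string{"alpha", "beta", "gamma"} {
        row := scoreRow(slab, len(item)) // same backing array every iteration
        fmt.Println(len(row), cap(row))
    }
}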


@@ -4,6 +4,7 @@ package util
 import "C"

 import (
+    "math"
     "os"
     "os/exec"
     "time"
@@ -17,6 +18,22 @@ func Max(first int, second int) int {
     return second
 }

+// Max16 returns the largest integer
+func Max16(first int16, second int16) int16 {
+    if first >= second {
+        return first
+    }
+    return second
+}
+
+// Max32 returns the largest 32-bit integer
+func Max32(first int32, second int32) int32 {
+    if first > second {
+        return first
+    }
+    return second
+}
+
 // Min returns the smallest integer
 func Min(first int, second int) int {
     if first <= second {
@@ -33,14 +50,6 @@ func Min32(first int32, second int32) int32 {
     return second
 }

-// Max32 returns the largest 32-bit integer
-func Max32(first int32, second int32) int32 {
-    if first > second {
-        return first
-    }
-    return second
-}
-
 // Constrain32 limits the given 32-bit integer with the upper and lower bounds
 func Constrain32(val int32, min int32, max int32) int32 {
     if val < min {
@@ -63,6 +72,15 @@ func Constrain(val int, min int, max int) int {
     return val
 }

+func AsUint16(val int) uint16 {
+    if val > math.MaxUint16 {
+        return math.MaxUint16
+    } else if val < 0 {
+        return 0
+    }
+    return uint16(val)
+}
+
 // DurWithin limits the given time.Duration with the upper and lower bounds
 func DurWithin(
     val time.Duration, min time.Duration, max time.Duration) time.Duration {


@@ -452,6 +452,15 @@ class TestGoFZF < TestBase
     assert_equal ['55', 'alt-z', '55'], readonce.split($/)
   end

+  def test_expect_printable_character_print_query
+    tmux.send_keys "seq 1 100 | #{fzf '--expect=z --print-query'}", :Enter
+    tmux.until { |lines| lines[-2].include? '100/100' }
+    tmux.send_keys '55'
+    tmux.until { |lines| lines[-2].include? '1/100' }
+    tmux.send_keys 'z'
+    assert_equal ['55', 'z', '55'], readonce.split($/)
+  end
+
   def test_expect_print_query_select_1
     tmux.send_keys "seq 1 100 | #{fzf '-q55 -1 --expect=alt-z --print-query'}", :Enter
     assert_equal ['55', '', '55'], readonce.split($/)
@@ -517,162 +526,91 @@ class TestGoFZF < TestBase
     assert_equal input, `#{FZF} -f"!z" -x --tiebreak end < #{tempname}`.split($/)
   end

-  # Since 0.11.2
-  def test_tiebreak_list
-    input = %w[
-      f-o-o-b-a-r
-      foobar----
-      --foobar
-      ----foobar
-      foobar--
-      --foobar--
-      foobar
-    ]
-    writelines tempname, input
-
-    assert_equal %w[
-      foobar----
-      --foobar
-      ----foobar
-      foobar--
-      --foobar--
-      foobar
-      f-o-o-b-a-r
-    ], `#{FZF} -ffb --tiebreak=index < #{tempname}`.split($/)
-
-    by_length = %w[
-      foobar
-      --foobar
-      foobar--
-      foobar----
-      ----foobar
-      --foobar--
-      f-o-o-b-a-r
-    ]
-    assert_equal by_length, `#{FZF} -ffb < #{tempname}`.split($/)
-    assert_equal by_length, `#{FZF} -ffb --tiebreak=length < #{tempname}`.split($/)
-
-    assert_equal %w[
-      foobar
-      foobar--
-      --foobar
-      foobar----
-      --foobar--
-      ----foobar
-      f-o-o-b-a-r
-    ], `#{FZF} -ffb --tiebreak=length,begin < #{tempname}`.split($/)
-
-    assert_equal %w[
-      foobar
-      --foobar
-      foobar--
-      ----foobar
-      --foobar--
-      foobar----
-      f-o-o-b-a-r
-    ], `#{FZF} -ffb --tiebreak=length,end < #{tempname}`.split($/)
-
-    assert_equal %w[
-      foobar----
-      foobar--
-      foobar
-      --foobar
-      --foobar--
-      ----foobar
-      f-o-o-b-a-r
-    ], `#{FZF} -ffb --tiebreak=begin < #{tempname}`.split($/)
-
-    by_begin_end = %w[
-      foobar
-      foobar--
-      foobar----
-      --foobar
-      --foobar--
-      ----foobar
-      f-o-o-b-a-r
-    ]
-    assert_equal by_begin_end, `#{FZF} -ffb --tiebreak=begin,length < #{tempname}`.split($/)
-    assert_equal by_begin_end, `#{FZF} -ffb --tiebreak=begin,end < #{tempname}`.split($/)
-
-    assert_equal %w[
-      --foobar
-      ----foobar
-      foobar
-      foobar--
-      --foobar--
-      foobar----
-      f-o-o-b-a-r
-    ], `#{FZF} -ffb --tiebreak=end < #{tempname}`.split($/)
-
-    by_begin_end = %w[
-      foobar
-      --foobar
-      ----foobar
-      foobar--
-      --foobar--
-      foobar----
-      f-o-o-b-a-r
-    ]
-    assert_equal by_begin_end, `#{FZF} -ffb --tiebreak=end,begin < #{tempname}`.split($/)
-    assert_equal by_begin_end, `#{FZF} -ffb --tiebreak=end,length < #{tempname}`.split($/)
-  end
+  def test_tiebreak_index_begin
+    writelines tempname, [
+      'xoxxxxxoxx',
+      'xoxxxxxox',
+      'xxoxxxoxx',
+      'xxxoxoxxx',
+      'xxxxoxox',
+      ' xxoxoxxx',
+    ]
+
+    assert_equal [
+      'xxxxoxox',
+      ' xxoxoxxx',
+      'xxxoxoxxx',
+      'xxoxxxoxx',
+      'xoxxxxxox',
+      'xoxxxxxoxx',
+    ], `#{FZF} -foo < #{tempname}`.split($/)
+
+    assert_equal [
+      'xxxoxoxxx',
+      'xxxxoxox',
+      ' xxoxoxxx',
+      'xxoxxxoxx',
+      'xoxxxxxoxx',
+      'xoxxxxxox',
+    ], `#{FZF} -foo --tiebreak=index < #{tempname}`.split($/)
+
+    # Note that --tiebreak=begin is now based on the first occurrence of the
+    # first character on the pattern
+    assert_equal [
+      ' xxoxoxxx',
+      'xxxoxoxxx',
+      'xxxxoxox',
+      'xxoxxxoxx',
+      'xoxxxxxoxx',
+      'xoxxxxxox',
+    ], `#{FZF} -foo --tiebreak=begin < #{tempname}`.split($/)
+
+    assert_equal [
+      ' xxoxoxxx',
+      'xxxoxoxxx',
+      'xxxxoxox',
+      'xxoxxxoxx',
+      'xoxxxxxox',
+      'xoxxxxxoxx',
+    ], `#{FZF} -foo --tiebreak=begin,length < #{tempname}`.split($/)
+  end

-  def test_tiebreak_white_prefix
-    writelines tempname, [
-      'f o o b a r',
-      ' foo bar',
-      ' foobar',
-      '----foo bar',
-      '----foobar',
-      ' foo bar',
-      ' foobar--',
-      ' foobar',
-      '--foo bar',
-      '--foobar',
-      'foobar',
-    ]
-
-    assert_equal [
-      ' foobar',
-      ' foobar',
-      'foobar',
-      ' foobar--',
-      '--foobar',
-      '----foobar',
-      ' foo bar',
-      ' foo bar',
-      '--foo bar',
-      '----foo bar',
-      'f o o b a r',
-    ], `#{FZF} -ffb < #{tempname}`.split($/)
-
-    assert_equal [
-      ' foobar',
-      ' foobar--',
-      ' foobar',
-      'foobar',
-      '--foobar',
-      '----foobar',
-      ' foo bar',
-      ' foo bar',
-      '--foo bar',
-      '----foo bar',
-      'f o o b a r',
-    ], `#{FZF} -ffb --tiebreak=begin < #{tempname}`.split($/)
-
-    assert_equal [
-      ' foobar',
-      ' foobar',
-      'foobar',
-      ' foobar--',
-      '--foobar',
-      '----foobar',
-      ' foo bar',
-      ' foo bar',
-      '--foo bar',
-      '----foo bar',
-      'f o o b a r',
-    ], `#{FZF} -ffb --tiebreak=begin,length < #{tempname}`.split($/)
-  end
+  def test_tiebreak_end
+    writelines tempname, [
+      'xoxxxxxxxx',
+      'xxoxxxxxxx',
+      'xxxoxxxxxx',
+      'xxxxoxxxx',
+      'xxxxxoxxx',
+      ' xxxxoxxx',
+    ]
+
+    assert_equal [
+      ' xxxxoxxx',
+      'xxxxoxxxx',
+      'xxxxxoxxx',
+      'xoxxxxxxxx',
+      'xxoxxxxxxx',
+      'xxxoxxxxxx',
+    ], `#{FZF} -fo < #{tempname}`.split($/)
+
+    assert_equal [
+      'xxxxxoxxx',
+      ' xxxxoxxx',
+      'xxxxoxxxx',
+      'xxxoxxxxxx',
+      'xxoxxxxxxx',
+      'xoxxxxxxxx',
+    ], `#{FZF} -fo --tiebreak=end < #{tempname}`.split($/)
+
+    assert_equal [
+      ' xxxxoxxx',
+      'xxxxxoxxx',
+      'xxxxoxxxx',
+      'xxxoxxxxxx',
+      'xxoxxxxxxx',
+      'xoxxxxxxxx',
+    ], `#{FZF} -fo --tiebreak=end,length,begin < #{tempname}`.split($/)
+  end

   def test_tiebreak_length_with_nth
@@ -748,17 +686,6 @@ class TestGoFZF < TestBase
     assert_equal output, `#{FZF} -fi -n2,1..2 < #{tempname}`.split($/)
   end

-  def test_tiebreak_end_backward_scan
-    input = %w[
-      foobar-fb
-      fubar
-    ]
-    writelines tempname, input
-
-    assert_equal input.reverse, `#{FZF} -f fb < #{tempname}`.split($/)
-    assert_equal input, `#{FZF} -f fb --tiebreak=end < #{tempname}`.split($/)
-  end
-
   def test_invalid_cache
     tmux.send_keys "(echo d; echo D; echo x) | #{fzf '-q d'}", :Enter
     tmux.until { |lines| lines[-2].include? '2/3' }