Compare commits

..

128 Commits

Author SHA1 Message Date
Andrew Gallant
0fdab0ec5e grep-0.1.9 2018-08-03 16:12:08 -04:00
Andrew Gallant
74ec5b8932 deps: update termcolor and encoding_rs_io 2018-08-03 16:08:57 -04:00
Andrew Gallant
2913fc4cd0 tests: reduce reliance on state in tests
This commit improves the integration test setup by running tests inside
the system's temporary directory instead of within ripgrep's `target`
directory. The motivation here is to attempt to reduce the effect of
unanticipated state on ripgrep's integration tests, such as the presence
of `.gitignore` files in ripgrep's checkout directory hierarchy
(including parent directories).

This doesn't remove all possible state. For example, there's no
guarantee that the system's temporary directory isn't itself within a
git repository. Moreover, there may still be other ignore rules in the
directory tree that might impact test behavior. Fixing this seems
somewhat difficult. Conceptually, it seems like ripgrep should run each
test in its own `chroot`-like environment, but doing this in a
non-annoying and portable way (including Windows) doesn't appear to be
possible.

Another approach to take here might be to teach ripgrep itself that a
particular directory should be treated as root, and therefore, never
look at anything outside that directory. This also seems complex to
implement, but tractable. Let's see how this approach works for now.

Fixes #448, #996
2018-07-29 10:41:03 -04:00
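
A minimal sketch of the kind of setup described above, using only the standard library; the helper name and directory layout are illustrative, not ripgrep's actual test harness:

    use std::env;
    use std::fs;
    use std::path::PathBuf;

    // Create a per-test directory under the system temporary directory instead
    // of under ripgrep's own `target/` directory, so stray `.gitignore` files
    // in the checkout hierarchy (or its parents) cannot influence a test.
    fn tmp_test_dir(name: &str) -> PathBuf {
        let dir = env::temp_dir().join("rg-integration-tests").join(name);
        fs::create_dir_all(&dir).expect("failed to create test directory");
        dir
    }
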
Andrew Gallant
7c412bb2fa tests/style: 80 columns, dammit 2018-07-29 10:41:03 -04:00
Andrew Gallant
651a0f1ddf ignore: fix typo 2018-07-29 08:37:24 -04:00
Andrew Gallant
45473ba48f ignore/style: 80 columns, dammit 2018-07-29 08:31:04 -04:00
Andrew Gallant
0863c75a5a ignore: fix bug in matched_path_or_any_parents
This method was supposed to panic whenever the given path wasn't under
the root of the gitignore matcher. Instead of using assert!, it was using
debug_assert!. This actually caused tests to fail when running under
release mode, because the debug_assert! wouldn't trip.

Fixes #671
2018-07-29 08:30:53 -04:00
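
For reference, a tiny illustration of why the two macros differ across build profiles; the function here is hypothetical, not the crate's actual code:

    // debug_assert! compiles to a no-op when debug assertions are disabled
    // (the default for --release), so a #[should_panic] test passes in debug
    // builds but fails in release builds. assert! fires in every profile.
    fn check_is_under_root(is_under_root: bool) {
        debug_assert!(is_under_root, "path is not under the matcher's root");
        assert!(is_under_root, "path is not under the matcher's root");
    }
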
Andrew Gallant
d94d99f657 ignore-0.4.3 2018-07-28 11:05:27 -04:00
Andrew Gallant
84585908ac globset-0.4.1 2018-07-28 10:59:54 -04:00
Andrew Gallant
1611c04e6f ignore: respect XDG_CONFIG_DIR/git/config
This commit updates the logic for finding the value of git's
`core.excludesFile` configuration parameter. Namely, we now check
`$XDG_CONFIG_DIR/git/config` in addition to `$HOME/.gitconfig` (where
the latter overrules the former on a knob-by-knob basis).

Fixes #995
2018-07-28 10:04:16 -04:00
dana
d857ad6ed3 complete: add --no-pre, improve --pre/--search-zip exclusivity 2018-07-24 07:21:18 -04:00
Andrew Gallant
4dd2f8e40e deps: update atty and winapi
This updates atty and winapi to their latest versions, including the bug
fix in atty that allows it to work with winapi 0.3.5.
2018-07-22 13:07:48 -04:00
Andrew Gallant
dca8110da2 ripgrep: when given no patterns, don't match
Generally speaking, ripgrep prevents the case of not having any patterns
via its arg parsing. However, it is possible for users to provide a file
of patterns via the `-f` flag. If that file is empty, then ripgrep has
nothing to search for and therefore should not ever produce any match.

One way of fixing this might be to replace the absence of patterns with
a pattern that can never match, but this still requires opening and
searching through every file, which is quite a waste. Instead, we detect
this case explicitly and quit early.

Fixes #900
2018-07-22 12:07:18 -04:00
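
A minimal sketch of the early-quit check, assuming a hypothetical `patterns` list collected from the command line and any `-f` files:

    // With `-f` pointing at an empty file, the pattern list is empty, so we
    // can report "no matches" immediately instead of opening and searching
    // every file with a pattern that can never match.
    fn should_search(patterns: &[String]) -> bool {
        !patterns.is_empty()
    }
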
Andrew Gallant
0d11497d21 tests: be looser with gzip failure
Don't expect the exact error message. Instead, just ask that the error
message exist and be non-empty.

Fixes #903
2018-07-22 11:08:16 -04:00
Andrew Gallant
22ac2e056e ripgrep: stop early when --files --quiet is used
This commit tweaks the implementation of the --files flag to stop early
when --quiet is provided.

Fixes #907
2018-07-22 11:05:24 -04:00
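
The short-circuiting idea, sketched against the public `ignore` crate API rather than ripgrep's actual worker loop:

    use ignore::Walk;

    // `any` stops the walk as soon as one file would have been listed, which
    // is all `--files --quiet` needs to know.
    fn any_file(path: &str) -> bool {
        Walk::new(path)
            .filter_map(Result::ok)
            .any(|entry| entry.file_type().map_or(false, |ft| ft.is_file()))
    }
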
Andrew Gallant
03af61fc7b ripgrep: don't skip tar archives
This removes logic from the decompressor for skipping tar archives. This
logic was originally added under the assumption that we probably want to
avoid the cost of reading them. However, this is generally inconsistent
with how ripgrep treats files like tar archives: it should search them
and do binary detection like normal.

Fixes #918
2018-07-22 10:59:09 -04:00
Andrew Gallant
560dffd247 ripgrep: add --no-ignore-global flag
This commit adds a new --no-ignore-global flag that permits disabling
the use of global gitignore filtering. Global gitignores are generally
found in `$HOME/.config/git/ignore`, but its location can be configured
via git's `core.excludesFile` option.

Closes #934
2018-07-22 10:42:32 -04:00
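
Roughly what the flag toggles in the `ignore` crate, as a usage sketch:

    use ignore::{Walk, WalkBuilder};

    // Disable only the global gitignore (core.excludesFile, typically
    // `$HOME/.config/git/ignore`) while keeping per-directory `.gitignore`
    // handling intact.
    fn walk_without_global_gitignore(dir: &str) -> Walk {
        let mut builder = WalkBuilder::new(dir);
        builder.git_global(false);
        builder.build()
    }
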
Andrew Gallant
e65ca21a6c ignore: only respect .gitignore in git repos
This commit fixes an interesting bug in the `ignore` crate where it
would basically respect any `.gitignore` file anywhere (including global
gitignores in `~/.config/git/ignore`), regardless of whether we were
searching in a git repository or not. This commit rectifies that
behavior to only respect gitignore rules when in a git repo.

The key change here is to move the logic of whether to traverse parents
into the directory matcher rather than putting the onus on the directory
traverser. In particular, we now need to traverse parent directories in
more cases than we previously did, since we need to determine whether
we're in a git repository or not.

Fixes #934
2018-07-22 10:33:23 -04:00
Andrew Gallant
6771626553 ripgrep: improve usage documentation
This shows an example for reading stdin.

Fixes #951
2018-07-22 09:38:49 -04:00
Andrew Gallant
b9c922be53 ripgrep: better --path-separator error message
This commit improves the error message when --path-separator fails. Namely,
it prints the separator it got and also prints a notice for Windows users
for common failure modes.

Fixes #957
2018-07-22 09:33:03 -04:00
Andrew Gallant
7a44cad599 deps: pin winapi to 0.3.4
winapi 0.3.5 changed how it represents some of its structs, which caused
a bug to surface in atty that prevents tty detection on Windows. atty
has an open PR to fix this: https://github.com/softprops/atty/pull/28

Until a new release of atty, we pin winapi to a version that works.
2018-07-22 09:31:22 -04:00
Andrew Gallant
6dc09c5b1b ripgrep: add --no-pre switch
This switch explicitly disables the --pre behavior. An empty string will
also disable --pre behavior, but actually uttering an empty flag value
is quite awkward, so we provide an explicit switch to do the same thing.

Thanks to @c-blake for pointing this out.
2018-07-21 20:57:27 -04:00
Andrew Gallant
209a125ea2 ripgrep: replace decoder with encoding_rs_io
This commit mostly moves the transcoder implementation to its own
crate: https://github.com/BurntSushi/encoding_rs_io

The new crate adds clear documentation and cleans up the implementation
to fully implement the contract of io::Read.
2018-07-21 20:36:32 -04:00
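
A small usage sketch of the new crate's reader type; the helper is illustrative, not ripgrep's decoder integration:

    use std::fs::File;
    use std::io::{self, Read};

    use encoding_rs_io::DecodeReaderBytes;

    // Wrap any reader so its contents are transcoded to UTF-8 (based on BOM
    // sniffing) while still honoring the io::Read contract.
    fn read_to_utf8(path: &str) -> io::Result<String> {
        let mut decoder = DecodeReaderBytes::new(File::open(path)?);
        let mut contents = String::new();
        decoder.read_to_string(&mut contents)?;
        Ok(contents)
    }
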
Andrew Gallant
090216cf00 ripgrep: reformat --pre warning 2018-07-21 17:53:55 -04:00
Andrew Gallant
c96a358593 ripgrep: add warning to --pre flag
The --pre flag can result in a pretty large performance penalty, so put
a warning in the flag documentation. This warning is important because a
flag like this could easily wind up in a user's configuration file.
2018-07-21 17:50:54 -04:00
Charles Blake
231456c409 ripgrep: add --pre flag
The preprocessor flag accepts a command program and executes this
program for every input file that is searched. Instead of searching the
file directly, ripgrep will instead search the stdout contents of the
program.

Closes #978, Closes #981
2018-07-21 17:25:12 -04:00
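
Conceptually, `--pre <command>` amounts to something like the following simplified sketch (the child's output is buffered here; the flag's real plumbing and error handling are more involved):

    use std::io;
    use std::process::Command;

    // Run the preprocessor with the file path as its only argument and search
    // the command's stdout instead of the file's own contents.
    fn preprocess(pre: &str, path: &str) -> io::Result<Vec<u8>> {
        let output = Command::new(pre).arg(path).output()?;
        Ok(output.stdout)
    }
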
Kalle Samuels
1d09d4d31b ripgrep: add support for lz4 decompression
This uses the lz4 binary for decompression.

Closes #898
2018-07-21 16:26:39 -04:00
Andrew Gallant
02f08f3800 changelog: updates
We continue our quest to update the CHANGELOG incrementally.
2018-07-21 14:23:38 -04:00
Mateusz Mikuła
470afa1bd7 snap: build without wrappers
Wrappers usually define things required for snaps to work, like the library
path. ripgrep, being a static binary, doesn't need them at all.

This will give a minor speed-up in scenarios where ripgrep is run multiple
times, and can be considered good practice for static binaries.

PR #979
2018-07-21 13:28:27 -04:00
phiresky
aa2ce39d14 ignore: fix has_any_ignore_rules for explicit ignores
When building an ignore::WalkBuilder by disabling all standard filters
and adding a custom global ignore file, the ignore file is not used. Example:

    let mut walker = ignore::WalkBuilder::new(dir);
    walker.standard_filters(false);
    walker.add_ignore(myfile);

This makes it impossible to use the ignore crate to walk a directory
with only custom ignore files. Very similar to issue #800 (fixed in
b71a110).

PR #988
2018-07-21 13:26:54 -04:00
Andrew Gallant
d11a3b3377 crates.io: use 'OR' instead of '/'
Fixes #987
2018-07-21 13:25:39 -04:00
Andrew Gallant
d09e2f6af1 globset: clarify documentation on regex method
This makes it clear that the `bytes` API of the regex crate should be
used instead of the Unicode API.

Fixes #985
2018-07-21 13:23:46 -04:00
Andrew Gallant
7b6af5a177 deps: update regex to 1.0.2
And also update to regex-syntax 0.6.2.
2018-07-18 09:29:04 -04:00
Andrew Gallant
9bd1aa1c04 readme: update rogue 1.20 reference 2018-07-17 20:41:34 -04:00
Andrew Gallant
1393ce4b6b deps: update all transitive dependencies
This updates all remaining transitive dependencies. Most changes appear
minor and there appear to be no minimum Rust version conflicts. Yay!
2018-07-17 20:34:03 -04:00
Andrew Gallant
8e358ee056 deps: bump various dependencies
Nothing major here. All patch releases. This should bring us completely
up to date with all direct dependencies.
2018-07-17 20:33:13 -04:00
Andrew Gallant
5b5f4e74d9 deps: bump encoding_rs to 0.8
This brings in performance improvements.
2018-07-17 20:29:41 -04:00
Andrew Gallant
7829850bf0 deps: bump minimum Rust to 1.23.0 from 1.20.0
1.23.0 is the first Rust release of 2018 and is around half a year old,
which seems old enough to move to. This also lets us bring in encoding_rs
0.8, which includes performance optimizations.
2018-07-17 20:29:20 -04:00
Andrew Gallant
06b66efd59 deps: get rid of unstable feature
This was introduced as a temporary measure for dealing with the regex
crate's unstable feature, but it was never included in a release of
ripgrep. Thus, we remove it. The regex crate will now automatically enable
SIMD optimizations when available.
2018-07-17 20:27:04 -04:00
Andrew Gallant
7e5a590276 grep: small literal detection fix
This commit tweaks the inner literal detection heuristic such that if it
comes up with any literal that is all whitespace, then it's likely a bad
literal to look for since it's so common. Therefore, we simply reject the
inner literal optimization in this case and let the regex engine do its
thang.
2018-07-17 20:27:04 -04:00
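
A sketch of the heuristic in isolation (not the actual literal extractor, which works on candidates produced during regex analysis):

    // An inner literal that is nothing but whitespace is too common to narrow
    // the search, so the literal optimization is skipped and the regex engine
    // handles the pattern on its own.
    fn is_useful_literal(lit: &[u8]) -> bool {
        !lit.is_empty() && !lit.iter().all(|&b| b.is_ascii_whitespace())
    }
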
Andrew Gallant
d17ca45063 deps: update termcolor to 1.0.0 2018-07-17 18:37:02 -04:00
Andrew Gallant
df469fe1b4 termcolor: moved to its own repository
We also move wincolor with it.

Fixes #924
2018-07-17 18:03:38 -04:00
Andrew Gallant
b64475aeac ignore: permit use of env::home_dir
Upstream deprecated env::home_dir because of minor bugs in some corner
cases. We should probably eventually migrate to a correct implementation
in the `dirs` crate, but using the buggy version is just fine for now.
2018-07-16 17:40:02 -04:00
dana
fca9709d94 complete: improve zsh completion
- Use groups with _arguments
- Conditionally complete 'negation' options (--messages, --no-column, &c.)
- Improve option exclusivity
- Improve option descriptions
- Improve completion of colour 'whens'
- Improve completion of colour specs
- Remove some unnecessary work-arounds
- Use more idiomatic conventions
2018-07-06 10:23:20 -04:00
dana
62b4813b8a ci: minor improvements to test_complete.sh 2018-07-06 10:23:20 -04:00
dana
b38b101c77 ripgrep: rename --maxdepth to --max-depth
We keep the old `--maxdepth` spelling to preserve backward
compatibility.

PR #967
2018-06-25 20:22:09 -04:00
dana
ac90316e35 ripgrep: add --fixed-strings flag
Fixes #964

PR #965
2018-06-25 17:02:02 -04:00
Andrew Gallant
a6467f880a ripgrep: disable autotests
This keeps us working in Rust 2018.

Fixes #923
2018-06-23 20:49:05 -04:00
Andrew Gallant
004bb35694 ripgrep/printer: fix small performance regression
This commit removes an unconditional extra regex search that is in fact
not always necessary. This can result in a 2x performance improvement in
cases where ripgrep reports many matches.

The fix itself isn't ideal, but we continue to punt on cleaning up the
printer until it is rewritten for libripgrep, which is happening Real
Soon Now.

Fixes #955
2018-06-23 20:49:05 -04:00
Andrew Gallant
cd6c190967 ripgrep: use new BufferedStandardStream from termcolor
Specifically, this will use a buffered writer when not printing to a tty.
This fixes a long standing performance regression where ripgrep would
slow down dramatically if it needed to report a lot of matches.

Fixes #955
2018-06-23 20:49:05 -04:00
Andrew Gallant
d5139228e5 termcolor: add BufferedStandardStream
This commit adds a new type, BufferedStandardStream, which emulates the
StandardStream API (sans `lock`), but will internally use a buffered
writer.

To achieve this, we add a new default method to the WriteColor trait that
indicates whether the underlying writer must synchronously communicate
with an API to control coloring (e.g., the Windows console API). The new
BufferedStandardStream then uses this default method to determine how
eager it should be to flush its buffer before employing color settings.
This should have basically zero overhead when using ANSI color escape
sequences.
2018-06-23 20:49:05 -04:00
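
A usage sketch of the new type, mirroring termcolor's StandardStream examples:

    use std::io::Write;

    use termcolor::{BufferedStandardStream, ColorChoice, ColorSpec, WriteColor};

    fn main() -> std::io::Result<()> {
        // Like StandardStream, but writes go through an internal buffer; with
        // ANSI escapes the color changes are buffered too, so no flush is
        // needed before each color change.
        let mut out = BufferedStandardStream::stdout(ColorChoice::Auto);
        out.set_color(ColorSpec::new().set_bold(true))?;
        writeln!(out, "a match")?;
        out.reset()?;
        out.flush()
    }
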
Stepan Koltsov
bb16ba6311 ignore/types: add Bazel
`BUILD`, `WORKSPACE` are mentioned here:
https://docs.bazel.build/versions/master/build-ref.html

`*.bzl` is mentioned here:
https://docs.bazel.build/versions/master/skylark/concepts.html

PR #960
2018-06-23 15:25:43 -04:00
Andrew Gallant
5e85f2577b deps: update to regex 1.0.1
This causes SIMD to kick in automatically when compiling with stable
Rust 1.27+.

We also update the README to describe the current state of things.

Thanks to @hartley for pointing this out:
https://twitter.com/hartley/status/1009950392862453760
2018-06-21 20:14:23 -04:00
Jon Surrell
ca23a170f7 ripgrep: use exit code 2 to indicate error
Exit code 1 was shared to indicate both "no results" and "error." Use
status code 2 to indicate errors, similar to grep's behavior.

Fixes #948 

PR #954
2018-06-19 07:41:44 -04:00
Mårten Kongstad
223d7d9846 ignore/types: add Android makefile types
The Android platform is gradually moving from Makefiles (*.mk) to
Blueprint files (*.bp). Add support for both. See [1] for more
information.

1. https://android.googlesource.com/platform/build/blueprint/+/master-soong
2018-06-15 08:29:53 -04:00
Mårten Kongstad
e4bce86111 ignore/types: add AIDL file type
Add support for Android Interface Definition Language files (*.aidl).
See [1] for more information.

1. https://developer.android.com/guide/components/aidl
2018-06-15 08:29:53 -04:00
Tim Kilbourn
15fa77cdb3 ignore/types: add FIDL type
FIDL is the Fuchsia Interface Definition Language
https://fuchsia.googlesource.com/zircon/+/HEAD/docs/fidl/index.md
2018-06-06 07:42:08 -04:00
Ahmed El Gabri
c3f97513d6 doc: sync config file examples
This brings the examples for configuration files in sync between
the man page and the guide.
2018-05-25 06:42:05 -04:00
George Plymale II
c21b9b20cf doc: add MacPorts installation instructions 2018-05-24 13:01:28 -04:00
Khalid Jebbari
ce145c6a2a doc: update crates.io badge
This is the official markdown snippet from shields.io
2018-05-24 06:46:08 -04:00
Wesley Moore
a383d5c4e9 doc: add BSD packages to README 2018-05-14 06:45:39 -04:00
Michael Hay
64317bda9f doc: fix broken link to RegexSet docs 2018-05-08 12:03:47 -04:00
Stephen E. Baker
7f3a0f0828 ignore/types: add jsp extension to java type 2018-05-08 12:03:19 -04:00
Bastien Orivel
49f36c7dcd deps: update regex to 1.0
We retain the `simd-accel` feature on globset for backwards
compatibility, but will remove it in the next semver release.
2018-05-07 13:07:30 -04:00
Garrett Squire
83b4fdb8d6 ignore/doc: improve docs for case_insensitive
This commit updates the OverrideBuilder and GitignoreBuilder docs
for the case_insensitive method, denoting that it must be called before
adding any patterns.
2018-05-06 19:03:11 -04:00
Elliott Slaughter
8b57d78b96 doc: update suggested version for ignore 2018-05-06 19:02:08 -04:00
Bram Geron
a2d8c49d6f ignore/types: add shorthand 'hs' for Haskell 2018-05-03 09:05:00 -04:00
Andrew Gallant
6ffb4b7466 doc: go away snap
Snap has caused a number of nonsensical bug reports, and not even the
`--classic` flag seems capable of fixing them. Therefore, remove snap
from the README and put in a special line in the ISSUE_TEMPLATE about
snap.

Fixes #902
2018-04-30 15:25:51 -04:00
Andrew Gallant
198d1fede9 ignore/types: fix typo in puppet glob
Thanks @CYBAI for noticing this!

See #899
2018-04-29 09:30:44 -04:00
Zach Crownover
667b9a7d62 ignore/types: add puppet
Puppet is primarily written in its own format of .pp files, but
custom facts and functions are often written in Ruby. The templating
language is ERB, so this will allow scanning of any of the three
most commonly used formats for Puppet-specific things.
2018-04-29 09:21:48 -04:00
Andrew Gallant
1f528f1641 doc: update crates.io description
This brings it in line with the README.
2018-04-26 17:04:35 -04:00
Andrew Gallant
ab64da73ab ignore: speed up Gitignore::empty
This commit makes Gitignore::empty a bit faster by avoiding allocation
and manually specializing the implementation instead of routing it through
the GitignoreBuilder.

This helps improve uses of ripgrep that traverse *many* directories, and
in particular, when the use of ignores is disabled via command line
switches.

Fixes #835, Closes #836
2018-04-24 11:19:03 -04:00
Jonathan Klimt
1266de3d4c ignore/types: add verilog 2018-04-24 09:00:03 -04:00
Andrew Gallant
bf51058eb2 tests: fix tests on Windows
A bug in the atty crate was previously masking a problem with the
integration tests on Windows. Namely, the bug in atty resulted in
atty::is(Stdin) returning true if we couldn't get the file name for the
stdin stream. This in turn caused tests like `rg foo` to search the CWD,
which was the intended behavior. However, once the atty bug was fixed,
atty::is(Stdin) no longer returned true, causing `rg foo` searches to
fail.

On Unix-like systems, the atty behavior has always been correct.
However, on Unix-like systems we have a decent way of detecting whether
stdin is readable or not. If it isn't---which is the case in the
integration tests---then we fall back to searching the CWD. On Windows
however, we haven't yet implemented anything to detect whether stdin is
readable or not, so we must always assume that it is. Therefore, we
never get the "go ahead" to search the CWD and the tests fail.

Most of the tests are written to search the CWD explicitly, but there
were a few stragglers that don't.

This isn't great, and we should try to figure out how to do better stdin
detection on Windows.
2018-04-23 20:37:59 -04:00
Andrew Gallant
3dc6fe6f05 output: remove unnecessary mut binding 2018-04-23 20:06:03 -04:00
Andrew Gallant
06438d5360 changelog: update 2018-04-23 20:01:57 -04:00
Andrew Gallant
ae6f871491 output: remove --line-number-width flag
This commit does what no software project has ever done before: we've
outright removed a flag with no possible way to recapture its
functionality.

This flag presents numerous problems in that it never really worked well
in the first place, and completely falls over when ripgrep uses the
--no-heading output format. Well-meaning users want ripgrep to fix this
by getting into the alignment business by buffering all output, but that
is a line that I refuse to cross.

Fixes #795
2018-04-23 19:57:22 -04:00
Andrew Gallant
ed059559cd deps: update to atty 0.2.9
https://github.com/softprops/atty/pull/25 was merged, so we can upgrade.
2018-04-23 19:32:39 -04:00
Andrew Gallant
b75526bd7f output: add --no-column flag
This disables columns in the output if they were otherwise enabled.

Fixes #880
2018-04-23 19:26:58 -04:00
Andrew Gallant
507801c1f2 ignore: support .git directory OR file
This improves support for submodules, which seem to use a '.git' file
instead of a '.git' directory to indicate a worktree.

Fixes #893
2018-04-23 18:33:25 -04:00
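
The gist of the check, as a standard-library sketch rather than the crate's exact code:

    use std::path::Path;

    // A directory is treated as a git worktree root if `.git` exists at all,
    // whether it is a directory (a normal repository) or a file (a submodule
    // or linked worktree).
    fn is_git_worktree_root(dir: &Path) -> bool {
        dir.join(".git").exists()
    }
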
Andrew Gallant
2a9d007261 complete: add --no-ignore-messages 2018-04-23 18:29:02 -04:00
Andrew Gallant
0ee0b160b5 logging: add new --no-ignore-messages flag
The new --no-ignore-messages flag permits suppressing errors related to
parsing .gitignore or .ignore files. These error messages can be somewhat
annoying since they can surface from repositories that one has no control
over.

Fixes #646
2018-04-23 18:18:44 -04:00
Andrew Gallant
b4781e2f91 doc: more specific docs for --no-messages
This makes it clear that the --no-messages flag doesn't actually
suppress all error messages, and is therefore not equivalent to
redirecting stderr to /dev/null.

See also: #860
2018-04-23 18:07:57 -04:00
Jeremy Day
8cb03941e6 doc: add search and replace faq
Closes #870
2018-04-23 17:45:26 -04:00
Andrew Gallant
6b15ce2342 deps: update remove_dir_all 2018-04-21 12:13:16 -04:00
Andrew Gallant
4c0b0c6c9d ignore: release 0.4.2 2018-04-21 12:10:16 -04:00
Andrew Gallant
6c8b1e93d5 globset: release 0.4.0 2018-04-21 12:09:15 -04:00
Andrew Gallant
ebdb7c1d4c ignore: impl Clone for DirEntry
There is a small hiccup here in that a `DirEntry` can embed errors
associated with reading an ignore file, which can be accessed and logged
by consumers if desired. That error type can contain an io::Error, which
isn't cloneable. We therefore implement Clone on our library's error
type in a way that re-creates the I/O error as best as possible.

Fixes #891
2018-04-21 12:01:11 -04:00
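
A minimal sketch of the re-creation trick for the non-cloneable io::Error piece:

    use std::io;

    // io::Error does not implement Clone, so "cloning" builds a new error from
    // the original's kind and message. That is equivalent for logging, even if
    // it is not the same underlying OS error value.
    fn clone_io_error(err: &io::Error) -> io::Error {
        io::Error::new(err.kind(), err.to_string())
    }
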
Andrew Gallant
58bd0c67da deps: pin to atty 0.2.6
atty 0.2.7 (and 0.2.8) contain a regression in cygwin terminals that
prevents basic use of ripgrep, and is also the cause of the Windows CI
test failures. For now, we pin to 0.2.6, but a patch has been submitted
upstream: https://github.com/softprops/atty/pull/25
2018-04-21 12:01:11 -04:00
Andrew Gallant
1503b3175f readme: add --classic flag to snap install
It would be nicer to switch to the `ripgrep` snap package, but
apparently it is configured to install with a binary name `ripgrep.rg`
instead of just `rg`. *sigh*
2018-04-17 06:43:43 -04:00
Andrew Gallant
0345e089aa deps: update regex-syntax 2018-04-15 08:45:05 -04:00
Avindra Goolcharan
0911ab1546 readme: add openSUSE Tumbleweed package
Link to software.opensuse.org package and add `zypper` instructions.
2018-04-09 07:22:04 -04:00
FlorentBecker
c4dd927a13 ignore: add Clone/Debug for builders 2018-04-05 08:06:26 -04:00
Andrew Gallant
34abed597f deps: update all dependencies
In particular, we can now drop rand 0.3.
2018-04-01 10:59:44 -04:00
Andrew Gallant
835600794f termcolor: release 0.3.6 2018-03-26 17:28:21 -04:00
ehuss
07713fb5c5 termcolor: fix bold + intense colors in Win 10
There is an issue with the Windows 10 console where if you issue the bold
escape sequence after one of the extended foreground colors, it overrides the
color.  This happens in termcolor if you have bold, intense, and color set.
The workaround is to issue the bold sequence before the color.

Fixes rust-lang/rust#49322
2018-03-26 16:42:48 -04:00
Dezhi “Andy” Fang
d7c9323a68 deps: update regex
This fixes build failures on latest nightly with SIMD features.
2018-03-17 19:33:34 -04:00
Andrew Gallant
b7d29d126f deps: update clap, atty, libc
Nothing to see here.

Note that we continue to refrain from updating tempdir, which means we are
still bringing in rand 0.4 and rand 0.3. Updating tempdir brings in an
old version of remove_dir_all, which in turn brings in winapi 0.2. No
thanks.
2018-03-13 22:55:39 -04:00
Andrew Gallant
42b8132d0a grep: add "perfect" smart case detection
This commit removes the previous smart case detection logic and replaces
it with detection based on the regex AST. This particular AST is a faithful
representation of the concrete syntax, which lets us be very precise in
how we handle it.

Closes #851
2018-03-13 22:55:39 -04:00
Andrew Gallant
cd08707c7c grep: upgrade to regex-syntax 0.5
This update brings with it many bug fixes:

  * Better error messages are printed overall. We also include an
    explicit call-out for unsupported features like backreferences
    and look-around.
  * Regexes like `\s*{` no longer emit incomprehensible errors.
  * Unicode escape sequences, such as `\u{..}` are now supported.

For the most part, this upgrade was done in a straight-forward way. We
resist the urge to refactor the `grep` crate, in anticipation of it
being rewritten anyway.

Note that we removed the `--fixed-strings` suggestion whenever a regex
syntax error occurs. In practice, I've found that it results in a lot of
false positives, and I believe that its use is not as paramount now that
regex parse errors are much more readable.

Closes #268, Closes #395, Closes #702, Closes #853
2018-03-13 22:55:39 -04:00
Andrew Gallant
c2e97cd858 changelog: update for 0.9.0 2018-03-12 23:21:42 -04:00
Andrew Gallant
1f70e9187c deps: update regex crate
This update brings with it a new feature of the regex crate which will
now use SIMD optimizations automatically at runtime with no necessary
compile time flags. All that's needed is to enable the `unstable` feature.

Other crates, such as bytecount and encoding_rs, are still using the
old-style SIMD support, so we leave the simd-accel and avx-accel features.
However, the binaries we distribute on Github no longer have those
features enabled, which makes them truly portable.

Fixes #135
2018-03-12 23:21:42 -04:00
Markus Staab
7120f32258 globset/doc: update README for 0.3 release 2018-03-12 07:19:55 -04:00
Balaji Sivaraman
00520b30f5 output: add --stats flag
This commit provides basic support for a --stats flag, which will print
various aggregate statistics about a search after all of the results
have been printed. This is mostly intended to support a similar feature
found in the Silver Searcher. Note though that we don't emit the total
bytes searched; this is a first pass at an implementation and we can
improve upon it later.

Closes #411, Closes #799
2018-03-10 10:59:00 -05:00
Andrew Gallant
11a8f0eaf0 args: treat --count --only-matching as --count-matches
Namely, when ripgrep is asked to count things and is also asked to print
every match on its own line, then we should just automatically count the
matches and not the lines. This is a departure from how GNU grep behaves,
but there is a compelling argument to be made that GNU grep's behavior
doesn't make a lot of sense.

Note that since this changes the behavior of combining two existing
flags, this is a breaking change.
2018-03-10 10:38:34 -05:00
Balaji Sivaraman
27fc9f2fd3 search: add a --count-matches flag
This commit introduces a new flag, --count-matches, which will cause
ripgrep to report a total count of all matches instead of a count of
total lines matched.

Closes #566, Closes #814
2018-03-10 10:38:25 -05:00
Balaji Sivaraman
96f73293c0 cleanup: rename match_count to match_line_count 2018-03-10 10:23:38 -05:00
Balaji Sivaraman
b006943c01 search: add -b/--byte-offset flag
This commit adds support for printing 0-based byte offset before each
line. We handle corner cases such as `-o/--only-matching` and
`-C/--context` as well.

Closes #812
2018-03-10 10:15:19 -05:00
Brian Malehorn
91d0756f62 ignore: support backslash escaping
Use the new `Globset::backslash_escape` knob to conform to git behavior:
`\` will escape the following character. For example, the pattern `\*`
will match a file literally named `*`.

Also tweak a test in ripgrep that was relying on this incorrect
behavior.

Closes #526, Closes #811
2018-03-10 09:30:55 -05:00
Andrew Gallant
54256515b4 globset: make ErrorKind enum extensible
This commit makes the ErrorKind enum extensible by adding a
__Nonexhaustive variant. Callers should use this as a hint that
exhaustive case analysis isn't possible in a stable way since new
variants may be added in the future without a semver bump.
2018-03-10 09:30:55 -05:00
Brian Malehorn
e2516ed095 globset: support backslash escaping
From `man 7 glob`:

    One can remove the special meaning of '?', '*' and '[' by preceding
    them by a backslash, or, in case this is part of a shell command
    line, enclosing them in quotes.

Conform to glob / fnmatch / git implementations by making `\` escape the
following character - for example `\?` will match a literal `?`.

However, only enable this by default on Unix platforms. Windows builds
will continue to use `\` as a path separator, but can still get the new
behavior by calling `globset.backslash_escape(true)`.

Adding tests for the `Globset::backslash_escape` option was a bit
involved, since the default value of this option is platform-dependent.

Extend the options framework to hold an `Option<T>` for each
knob, where `None` means "default" and `Some(v)` means "override with
`v`". This way we only have to specify the default values once in
`GlobOptions::default()` rather than replicated in both code and tests.

Finally write a few behavioral tests, and some tests to confirm it
varies by platform.
2018-03-10 09:30:55 -05:00
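
A small example of the knob, assuming it is exposed on globset's GlobBuilder:

    use globset::GlobBuilder;

    fn main() -> Result<(), globset::Error> {
        // With backslash escaping enabled, `\*` matches a file literally named
        // `*` rather than acting as a wildcard.
        let matcher = GlobBuilder::new(r"\*")
            .backslash_escape(true)
            .build()?
            .compile_matcher();
        assert!(matcher.is_match("*"));
        assert!(!matcher.is_match("anything-else"));
        Ok(())
    }
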
Alejandro Barreto
c0c80e0209 doc: add Windows Scoop install instructions 2018-03-10 08:15:22 -05:00
Andrew Gallant
dbf6f15625 mmap: handle ENOMEM error
This commit causes a memory map strategy to fall back to a file backed
strategy if the mmap call fails with an `ENOMEM` error.

Fixes #852
2018-03-10 08:13:27 -05:00
Ryan Hayle
9163aaac27 doc: add additional instructions for snap
In the snap store, ripgrep 0.8.1 is only available as a candidate
release, while the default (stable) release is 0.7.1.
2018-03-09 07:09:02 -05:00
Patrick Artounian
9d7448bfc0 doc: shorten Rust nightly brew install command
The burntsushi/ripgrep/ prefix is not needed.
2018-03-07 12:44:06 -05:00
Richard Dodd (dodj)
b98585b429 termcolor/doc: fix typo 2018-03-03 09:20:28 -05:00
Andrew Gallant
f5411b992c doc: use PATTERNFILE for -f/--file flag
Fixes #832
2018-02-23 12:18:15 -05:00
Anthony Wong
492effc7be pkg: update snapcraft.yaml
Update the version number and don't declare an alias, as doing so has been deprecated.
2018-02-23 06:51:39 -05:00
Andrew Gallant
4889d2d37c doc: clarify licensing story 2018-02-22 19:01:43 -05:00
Andrew Gallant
354996a16f doc: clarify snap installation instructions
Fixes #782
2018-02-22 16:56:22 -05:00
Andrew Gallant
cbebb010a7 doc: fix typos 2018-02-21 15:59:12 -05:00
Andrew Gallant
7098daf6a8 doc: "to rip" means "fast"
Answer the origin story of ripgrep's name.
2018-02-21 15:53:50 -05:00
Andrew Gallant
17d09c0882 pkg: update brew tap to 0.8.1 2018-02-20 21:51:41 -05:00
Andrew Gallant
c8e9f25b85 ci: fix macOS asciidoc installation 2018-02-20 21:05:40 -05:00
Andrew Gallant
9305f89f39 ci: fix man page generation on macOS 2018-02-20 20:55:12 -05:00
Andrew Gallant
9c216ad9a4 release: 0.8.1 2018-02-20 20:19:03 -05:00
Andrew Gallant
6862e07870 changelog: 0.8.1 2018-02-20 20:18:01 -05:00
Andrew Gallant
a6d09b2d42 deps: update to clap 2.30.0 2018-02-20 20:16:57 -05:00
63 changed files with 2737 additions and 3620 deletions

.travis.yml

@@ -16,6 +16,7 @@ addons:
- zsh
# Needed for testing decompression search.
- xz-utils
- liblz4-tool
matrix:
fast_finish: true
include:
@@ -30,7 +31,8 @@ matrix:
env: TARGET=x86_64-unknown-linux-musl
- os: osx
rust: nightly
env: TARGET=x86_64-apple-darwin
# XML_CATALOG_FILES is apparently necessary for asciidoc on macOS.
env: TARGET=x86_64-apple-darwin XML_CATALOG_FILES=/usr/local/etc/xml/catalog
- os: linux
rust: nightly
env: TARGET=arm-unknown-linux-gnueabihf GCC_VERSION=4.8
@@ -58,13 +60,13 @@ matrix:
# Minimum Rust supported channel. We enable these to make sure ripgrep
# continues to work on the advertised minimum Rust version.
- os: linux
rust: 1.20.0
rust: 1.23.0
env: TARGET=x86_64-unknown-linux-gnu
- os: linux
rust: 1.20.0
rust: 1.23.0
env: TARGET=x86_64-unknown-linux-musl
- os: linux
rust: 1.20.0
rust: 1.23.0
env: TARGET=arm-unknown-linux-gnueabihf GCC_VERSION=4.8
addons:
apt:

CHANGELOG.md

@@ -1,3 +1,154 @@
0.9.0 (TBD)
===========
This is a new minor version release of ripgrep that mostly contains bug fixes.
Releases provided on Github for `x86` and `x86_64` will now work on all target
CPUs, and will also automatically take advantage of features found on modern
CPUs (such as AVX2) for additional optimizations.
This release increases the **minimum supported Rust version** from 1.20.0 to
1.23.0.
**BREAKING CHANGES**:
* When `--count` and `--only-matching` are provided simultaneously, the
behavior of ripgrep is as if the `--count-matches` flag was given. That is,
the total number of matches is reported, where there may be multiple matches
per line. Previously, the behavior of ripgrep was to report the total number
of matching lines. (Note that this behavior diverges from the behavior of
GNU grep.)
* Octal syntax is no longer supported. ripgrep previously accepted expressions
like `\1` as syntax for matching `U+0001`, but ripgrep will now report an
error instead.
* The `--line-number-width` flag has been removed. Its functionality was not
carefully considered with all ripgrep output formats.
See [#795](https://github.com/BurntSushi/ripgrep/issues/795) for more
details.
Feature enhancements:
* Added or improved file type filtering for Android, Bazel, Fuchsia, Haskell,
Java and Puppet.
* [FEATURE #411](https://github.com/BurntSushi/ripgrep/issues/411):
Add a `--stats` flag, which emits aggregate statistics after search results.
* [FEATURE #646](https://github.com/BurntSushi/ripgrep/issues/646):
Add a `--no-ignore-messages` flag, which suppresses parse errors from reading
`.ignore` and `.gitignore` files.
* [FEATURE #702](https://github.com/BurntSushi/ripgrep/issues/702):
Support `\u{..}` Unicode escape sequences.
* [FEATURE #812](https://github.com/BurntSushi/ripgrep/issues/812):
Add `-b/--byte-offset` flag that reports byte offset of each matching line.
* [FEATURE #814](https://github.com/BurntSushi/ripgrep/issues/814):
Add a `--count-matches` flag, which is like `--count`, but counts every match instead of every matching line.
* [FEATURE #880](https://github.com/BurntSushi/ripgrep/issues/880):
Add a `--no-column` flag, which disables column numbers in the output.
* [FEATURE #898](https://github.com/BurntSushi/ripgrep/issues/898):
Add support for `lz4` when using the `-z/--search-zip` flag.
* [FEATURE #924](https://github.com/BurntSushi/ripgrep/issues/924):
`termcolor` has moved to its own repository:
https://github.com/BurntSushi/termcolor
* [FEATURE #934](https://github.com/BurntSushi/ripgrep/issues/934):
Add a new flag, `--no-ignore-global`, that permits disabling global
gitignores.
* [FEATURE #967](https://github.com/BurntSushi/ripgrep/issues/967):
Rename `--maxdepth` to `--max-depth` for consistency. Keep `--maxdepth` for
backwards compatibility.
* [FEATURE #978](https://github.com/BurntSushi/ripgrep/issues/978):
Add a `--pre` option to filter inputs with an arbitrary program.
* [FEATURE fca9709d](https://github.com/BurntSushi/ripgrep/commit/fca9709d):
Improve zsh completion.
Bug fixes:
* [BUG #135](https://github.com/BurntSushi/ripgrep/issues/135):
Release portable binaries that conditionally use SSSE3, AVX2, etc., at
runtime.
* [BUG #268](https://github.com/BurntSushi/ripgrep/issues/268):
Print descriptive error message when trying to use look-around or
backreferences.
* [BUG #395](https://github.com/BurntSushi/ripgrep/issues/395):
Show comprehensible error messages for regexes like `\s*{`.
* [BUG #526](https://github.com/BurntSushi/ripgrep/issues/526):
Support backslash escapes in globs.
* [BUG #795](https://github.com/BurntSushi/ripgrep/issues/795):
Fix problems with `--line-number-width` by removing it.
* [BUG #832](https://github.com/BurntSushi/ripgrep/issues/832):
Clarify usage instructions for `-f/--file` flag.
* [BUG #835](https://github.com/BurntSushi/ripgrep/issues/835):
Fix small performance regression while crawling very large directory trees.
* [BUG #851](https://github.com/BurntSushi/ripgrep/issues/851):
Fix `-S/--smart-case` detection once and for all.
* [BUG #852](https://github.com/BurntSushi/ripgrep/issues/852):
Be robust with respect to `ENOMEM` errors returned by `mmap`.
* [BUG #853](https://github.com/BurntSushi/ripgrep/issues/853):
Upgrade `grep` crate to `regex-syntax 0.5.0`.
* [BUG #893](https://github.com/BurntSushi/ripgrep/issues/893):
Improve support for git submodules.
* [BUG #900](https://github.com/BurntSushi/ripgrep/issues/900):
When no patterns are given, ripgrep should never match anything.
* [BUG #907](https://github.com/BurntSushi/ripgrep/issues/907):
ripgrep will now stop traversing after the first file when `--quiet --files`
is used.
* [BUG #918](https://github.com/BurntSushi/ripgrep/issues/918):
Don't skip tar archives when `-z/--search-zip` is used.
* [BUG #934](https://github.com/BurntSushi/ripgrep/issues/934):
Don't respect gitignore files when searching outside git repositories.
* [BUG #948](https://github.com/BurntSushi/ripgrep/issues/948):
Use exit code 2 to indicate error, and use exit code 1 to indicate no
matches.
* [BUG #951](https://github.com/BurntSushi/ripgrep/issues/951):
Add stdin example to ripgrep usage documentation.
* [BUG #955](https://github.com/BurntSushi/ripgrep/issues/955):
Use buffered writing when not printing to a tty, which fixes a performance
regression.
* [BUG #957](https://github.com/BurntSushi/ripgrep/issues/957):
Improve the error message shown for `--path-separator /` in some Windows
shells.
* [BUG #964](https://github.com/BurntSushi/ripgrep/issues/964):
Add a `--no-fixed-strings` flag to disable `-F/--fixed-strings`.
* [BUG #988](https://github.com/BurntSushi/ripgrep/issues/988):
Fix a bug in the `ignore` crate that prevented the use of explicit ignore
files after disabling all other ignore rules.
* [BUG #995](https://github.com/BurntSushi/ripgrep/issues/995):
Respect `$XDG_CONFIG_DIR/git/config` for detecting `core.excludesFile`.
0.8.1 (2018-02-20)
==================
This is a patch release of ripgrep that primarily fixes regressions introduced
in 0.8.0 (#820 and #824) in directory traversal on Windows. These regressions
do not impact non-Windows users.
Feature enhancements:
* Added or improved file type filtering for csv and VHDL.
* [FEATURE #798](https://github.com/BurntSushi/ripgrep/issues/798):
Add `underline` support to `termcolor` and ripgrep. See documentation on the
`--colors` flag for details.
Bug fixes:
* [BUG #684](https://github.com/BurntSushi/ripgrep/issues/684):
Improve documentation for the `--ignore-file` flag.
* [BUG #789](https://github.com/BurntSushi/ripgrep/issues/789):
Don't show `(rev )` if the revision wasn't available during the build.
* [BUG #791](https://github.com/BurntSushi/ripgrep/issues/791):
Add man page to ARM release.
* [BUG #797](https://github.com/BurntSushi/ripgrep/issues/797):
Improve documentation for "intense" setting in `termcolor`.
* [BUG #800](https://github.com/BurntSushi/ripgrep/issues/800):
Fix a bug in the `ignore` crate for custom ignore files. This had no impact
on ripgrep.
* [BUG #807](https://github.com/BurntSushi/ripgrep/issues/807):
Fix a bug where `rg --hidden .` behaved differently from `rg --hidden ./`.
* [BUG #815](https://github.com/BurntSushi/ripgrep/issues/815):
Clarify a common failure mode in user guide.
* [BUG #820](https://github.com/BurntSushi/ripgrep/issues/820):
Fixes a bug on Windows where symlinks were followed even if not requested.
* [BUG #824](https://github.com/BurntSushi/ripgrep/issues/824):
Fix a performance regression in directory traversal on Windows.
0.8.0 (2018-02-11)
==================
This is a new minor version release of ripgrep that satisfies several popular

Cargo.lock (generated)

@@ -1,6 +1,6 @@
[[package]]
name = "aho-corasick"
version = "0.6.4"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -8,22 +8,25 @@ dependencies = [
[[package]]
name = "ansi_term"
version = "0.10.2"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "atty"
version = "0.2.6"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bitflags"
version = "1.0.1"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -31,25 +34,25 @@ name = "bytecount"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"simd 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cfg-if"
version = "0.1.2"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "clap"
version = "2.29.4"
version = "2.32.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"strsim 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"textwrap 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"textwrap 0.10.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -59,11 +62,19 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "encoding_rs"
version = "0.7.2"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"simd 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "encoding_rs_io"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -76,7 +87,7 @@ name = "fuchsia-zircon"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -92,59 +103,59 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "globset"
version = "0.3.0"
version = "0.4.1"
dependencies = [
"aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)",
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)",
"glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep"
version = "0.1.8"
version = "0.1.9"
dependencies = [
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ignore"
version = "0.4.1"
version = "0.4.3"
dependencies = [
"crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.3.0",
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.4.1",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"walkdir 2.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "lazy_static"
version = "1.0.0"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.36"
version = "0.2.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.1"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -152,7 +163,7 @@ name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -160,8 +171,8 @@ name = "memmap"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -169,17 +180,7 @@ name = "num_cpus"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand"
version = "0.3.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -188,13 +189,13 @@ version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "redox_syscall"
version = "0.1.37"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -202,48 +203,59 @@ name = "redox_termios"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex"
version = "0.2.6"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)",
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"simd 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.4.2"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "remove_dir_all"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ripgrep"
version = "0.8.0"
version = "0.8.1"
dependencies = [
"atty 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"bytecount 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"clap 2.29.4 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs 0.7.2 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.3.0",
"grep 0.1.8",
"ignore 0.4.1",
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"clap 2.32.0 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs_io 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.4.1",
"grep 0.1.9",
"ignore 0.4.3",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"memmap 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 0.3.5",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -251,12 +263,12 @@ name = "same-file"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "simd"
version = "0.2.1"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -266,17 +278,19 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "tempdir"
version = "0.3.5"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand 0.3.22 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"remove_dir_all 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "0.3.5"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 0.1.6",
"wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -284,17 +298,17 @@ name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "textwrap"
version = "0.9.0"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -302,13 +316,18 @@ name = "thread_local"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ucd-util"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-width"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -335,12 +354,12 @@ version = "2.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi"
version = "0.3.4"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -359,49 +378,54 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "wincolor"
version = "0.1.6"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)" = "d6531d44de723825aa81398a6415283229725a00fa30713812ab9323faa82fc4"
"checksum ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6b3568b48b7cefa6b8ce125f9bb4989e52fbcc29ebea88df04cc7c5f12f70455"
"checksum atty 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "8352656fd42c30a0c3c89d26dea01e3b77c0ab2af18230835c15e2e13cd51859"
"checksum bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b3c30d3802dfb7281680d6285f2ccdaa8c2d8fee41f93805dba5c4cf50dc23cf"
"checksum aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c1c6d463cbe7ed28720b5b489e7c083eeb8f90d08be2a0d6bb9e1ffea9ce1afa"
"checksum ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ee49baf6cb617b853aa8d93bf420db2383fab46d314482ca2803b40d5fde979b"
"checksum atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "9a7d5b8723950951411ee34d271d99dddcc2035a16ab25310ea2c8cfd4369652"
"checksum bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d0c54bb8f454c567f21197eefcdbf5679d0bd99f2ddbe52e84c77061952e6789"
"checksum bytecount 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "882585cd7ec84e902472df34a5e01891202db3bf62614e1f0afe459c1afcf744"
"checksum cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "d4c819a1287eb618df47cc647173c5c4c66ba19d888a6e50d605672aed3140de"
"checksum clap 2.29.4 (registry+https://github.com/rust-lang/crates.io-index)" = "7b8f59bcebcfe4269b09f71dab0da15b355c75916a8f975d3876ce81561893ee"
"checksum cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "efe5c877e17a9c717a0bf3613b2709f723202c4e4675cc8f12926ded29bcb17e"
"checksum clap 2.32.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b957d88f4b6a63b9d70d5f454ac8011819c6efa7727858f458ab71c756ce2d3e"
"checksum crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "24ce9782d4d5c53674646a6a4c1863a21a8fc0cb649b3c94dfc16e45071dea19"
"checksum encoding_rs 0.7.2 (registry+https://github.com/rust-lang/crates.io-index)" = "98fd0f24d1fb71a4a6b9330c8ca04cbd4e7cc5d846b54ca74ff376bc7c9f798d"
"checksum encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)" = "88a1b66a0d28af4b03a8c8278c6dcb90e6e600d89c14500a9e7a02e64b9ee3ac"
"checksum encoding_rs_io 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ad0ffe753ba194ef1bc070e8d61edaadb1536c05e364fc9178ca6cbde10922c4"
"checksum fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)" = "2fad85553e09a6f881f739c29f0b00b0f01357c743266d478b68951ce23285f3"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "8be18de09a56b60ed0edf84bc9df007e30040691af7acd1c41874faac5895bfb"
"checksum lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c8f31047daa365f19be14b47c29df4f7c3b581832407daabe6ae77397619237d"
"checksum libc 0.2.36 (registry+https://github.com/rust-lang/crates.io-index)" = "1e5d97d6708edaa407429faa671b942dc0f2727222fb6b6539bf1db936e4b121"
"checksum log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "89f010e843f2b1a31dbd316b3b8d443758bc634bed37aabade59c686d644e0a2"
"checksum lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "fb497c35d362b6a331cfd94956a07fc2c78a4604cdbee844a81170386b996dd3"
"checksum libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)" = "b685088df2b950fccadf07a7187c8ef846a959c142338a48f9dc0b94517eb5f1"
"checksum log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "61bd98ae7f7b754bc53dca7d44b604f733c6bba044ea6f41bc8d89272d8161d2"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum memmap 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)" = "e2ffa2c986de11a9df78620c01eeaaf27d94d3ff02bf81bfcca953102dd0c6ff"
"checksum num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c51a3322e4bca9d212ad9a158a02abc6934d005490c054a2778df73a70aa0a30"
"checksum rand 0.3.22 (registry+https://github.com/rust-lang/crates.io-index)" = "15a732abf9d20f0ad8eeb6f909bf6868722d9a06e1e50802b6a70351f40b4eb1"
"checksum rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "eba5f8cb59cc50ed56be8880a5c7b496bfd9bd26394e176bc67884094145c2c5"
"checksum redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)" = "0d92eecebad22b767915e4d529f89f28ee96dbbf5a4810d2b844373f136417fd"
"checksum redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "c214e91d3ecf43e9a4e41e578973adeb14b474f2bee858742d127af75a0112b1"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "5be5347bde0c48cfd8c3fdc0766cdfe9d8a755ef84d620d6794c778c91de8b2b"
"checksum regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "8e931c58b93d86f080c734bfd2bce7dd0079ae2331235818133c8be7f422e20e"
"checksum regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5bbbea44c5490a1e84357ff28b7d518b4619a159fed5d25f6c1de2d19cc42814"
"checksum regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)" = "747ba3b235651f6e2f67dfa8bcdcd073ddb7c243cb21c442fc12395dfcac212d"
"checksum remove_dir_all 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "3488ba1b9a2084d38645c4c08276a1752dcbf2c7130d74f1569681ad5d2799c5"
"checksum same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "cfb6eded0b06a0b512c8ddbcf04089138c9b4362c2f696f3c3d76039d68f3637"
"checksum simd 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "3dd0805c7363ab51a829a1511ad24b6ed0349feaa756c4bc2f977f9f496e6673"
"checksum simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "ed3686dd9418ebcc3a26a0c0ae56deab0681e53fe899af91f5bbcee667ebffb1"
"checksum strsim 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "bb4f380125926a99e52bc279241539c018323fab05ad6368b56f93d9369ff550"
"checksum tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "87974a6f5c1dfb344d733055601650059a3363de2a6104819293baff662132d6"
"checksum tempdir 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)" = "15f2b5fb00ccdf689e0149d1b1b3c03fead81c2b37735d812fa8bddbbf41b6d8"
"checksum termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "722426c4a0539da2c4ffd9b419d90ad540b4cff4a053be9069c908d4d07e2836"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum textwrap 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c0b59b6b4b44d867f1370ef1bd91bfb262bf07bf0ae65c202ea2fbc16153b693"
"checksum textwrap 0.10.0 (registry+https://github.com/rust-lang/crates.io-index)" = "307686869c93e71f94da64286f9a9524c0f308a9e1c87a583de8e9c9039ad3f6"
"checksum thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "279ef31c19ededf577bfd12dfae728040a21f635b06a24cd670ff510edd38963"
"checksum unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "bf3a113775714a22dcb774d8ea3655c53a32debae63a063acc00a91cc586245f"
"checksum ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fd2be2d6639d0f8fe6cdda291ad456e23629558d466e2789d2c3e9892bda285d"
"checksum unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "882386231c45df4700b275c7ff55b6f3698780a650026380e72dabe76fa46526"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum walkdir 2.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "63636bd0eb3d00ccb8b9036381b526efac53caf112b7783b730ab3f8e44da369"
"checksum winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "04e3bd221fcbe8a271359c04f21a76db7d0c6028862d1bb5512d85e1e2eb5bb3"
"checksum winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "773ef9dcc5f24b7d850d0ff101e542ff24c3b090a9768e03ff889fdef41f00fd"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
"checksum wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b9dc3aa9dcda98b5a16150c54619c1ead22e3d3a5d458778ae914be760aa981a"

View File

@@ -1,10 +1,11 @@
[package]
name = "ripgrep"
version = "0.8.0" #:version
version = "0.8.1" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Line oriented search tool using Rust's regex library. Combines the raw
performance of grep with the usability of the silver searcher.
ripgrep is a line-oriented search tool that recursively searches your current
directory for a regex pattern while respecting your gitignore rules. ripgrep
has first class support on Windows, macOS and Linux.
"""
documentation = "https://github.com/BurntSushi/ripgrep"
homepage = "https://github.com/BurntSushi/ripgrep"
@@ -12,9 +13,10 @@ repository = "https://github.com/BurntSushi/ripgrep"
readme = "README.md"
keywords = ["regex", "grep", "egrep", "search", "pattern"]
categories = ["command-line-utilities", "text-processing"]
license = "Unlicense/MIT"
license = "Unlicense OR MIT"
exclude = ["HomebrewFormula"]
build = "build.rs"
autotests = false
[badges]
travis-ci = { repository = "BurntSushi/ripgrep" }
@@ -30,13 +32,14 @@ name = "integration"
path = "tests/tests.rs"
[workspace]
members = ["grep", "globset", "ignore", "termcolor", "wincolor"]
members = ["grep", "globset", "ignore"]
[dependencies]
atty = "0.2.2"
atty = "0.2.11"
bytecount = "0.3.1"
encoding_rs = "0.7"
globset = { version = "0.3.0", path = "globset" }
encoding_rs = "0.8"
encoding_rs_io = "0.1"
globset = { version = "0.4.0", path = "globset" }
grep = { version = "0.1.8", path = "grep" }
ignore = { version = "0.4.0", path = "ignore" }
lazy_static = "1"
@@ -45,9 +48,9 @@ log = "0.4"
memchr = "2"
memmap = "0.6"
num_cpus = "1"
regex = "0.2.4"
regex = "1"
same-file = "1"
termcolor = { version = "0.3.4", path = "termcolor" }
termcolor = "1"
[dependencies.clap]
version = "2.29.4"
@@ -67,10 +70,11 @@ default-features = false
features = ["suggestions", "color"]
[features]
avx-accel = ["bytecount/avx-accel"]
avx-accel = [
"bytecount/avx-accel",
]
simd-accel = [
"bytecount/simd-accel",
"regex/simd-accel",
"encoding_rs/simd-accel",
]

134
FAQ.md
View File

@@ -20,7 +20,10 @@
* [How do I create an alias for ripgrep on Windows?](#rg-alias-windows)
* [How do I create a PowerShell profile?](#powershell-profile)
* [How do I pipe non-ASCII content to ripgrep on Windows?](#pipe-non-ascii-windows)
* [How can I search and replace with ripgrep?](#search-and-replace)
* [How is ripgrep licensed?](#license)
* [Can ripgrep replace grep?](#posix4ever)
* [What does the "rip" in ripgrep mean?](#intentcountsforsomething)
<h3 name="config">
@@ -132,7 +135,7 @@ How do I search compressed files?
</h3>
ripgrep's `-z/--search-zip` flag will cause it to search compressed files
automatically. Currently, this supports gzip, bzip2, lzma and xz only and
automatically. Currently, this supports gzip, bzip2, lzma, lz4 and xz only and
requires the corresponding `gzip`, `bzip2` and `xz` binaries to be installed on
your system. (That is, ripgrep does decompression by shelling out to another
process.)
@@ -468,6 +471,98 @@ that the console will use for printing to UTF-8 with
will also reset when PowerShell is restarted, so you can add that line
to your profile as well if you want to make the setting permanent.
<h3 name="search-and-replace">
How can I search and replace with ripgrep?
</h3>
Using ripgrep alone, you can't. ripgrep is a search tool that will never
touch your files. However, the output of ripgrep can be piped to other tools
that do modify files on disk. See
[this issue](https://github.com/BurntSushi/ripgrep/issues/74) for more
information.
sed is one such tool that can modify files on disk. sed can take a filename
and a substitution command to search and replace in the specified file.
Files containing matching patterns can be provided to sed using
```
rg foo --files-with-matches
```
The output of this command is a list of filenames that contain a match for
the `foo` pattern.
This list can be piped into `xargs`, which will split the filenames from
standard input into arguments for the command following xargs. You can use this
combination to pipe a list of filenames into sed for replacement. For example:
```
rg foo --files-with-matches | xargs sed -i 's/foo/bar/g'
```
will replace all instances of 'foo' with 'bar' in the files in which
ripgrep finds the foo pattern. The `-i` flag to sed indicates that you are
editing files in place, and `s/foo/bar/g` says that you are performing a
**s**ubstitution of the pattern `foo` with `bar`, and that you are doing this
substitution **g**lobally (all occurrences of the pattern in each file).
Note: the above command assumes that you are using GNU sed. If you are using
BSD sed (the default on macOS and FreeBSD) then you must modify the above
command to be the following:
```
rg foo --files-with-matches | xargs sed -i '' 's/foo/bar/g'
```
The `-i` flag in BSD sed requires a file extension to be given to make backups
for all modified files. Specifying the empty string prevents file backups from
being made.
Finally, if any of your file paths contain whitespace in them, then you might
need to delimit your file paths with a NUL terminator. This requires telling
ripgrep to output NUL bytes between each path, and telling xargs to read paths
delimited by NUL bytes:
```
rg foo --files-with-matches -0 | xargs -0 sed -i 's/foo/bar/g'
```
To learn more about sed, see the sed manual
[here](https://www.gnu.org/software/sed/manual/sed.html).
Additionally, Facebook has a tool called
[fastmod](https://github.com/facebookincubator/fastmod)
that uses some of the same libraries as ripgrep and might provide a more
ergonomic search-and-replace experience.
<h3 name="license">
How is ripgrep licensed?
</h3>
ripgrep is dual licensed under the
[Unlicense](https://unlicense.org/)
and MIT licenses. Specifically, you may use ripgrep under the terms of either
license.
The reason why ripgrep is dual licensed this way is two-fold:
1. I, as ripgrep's author, would like to participate in a small bit of
ideological activism by promoting the Unlicense's goal: to disclaim
copyright monopoly interest.
2. I, as ripgrep's author, would like as many people to use ripgrep as
possible. Since the Unlicense is not a proven or well known license, ripgrep
is also offered under the MIT license, which is ubiquitous and accepted by
almost everyone.
More specifically, ripgrep and all its dependencies are compatible with this
licensing choice. In particular, ripgrep's dependencies (direct and transitive)
will always be limited to permissive licenses. That is, ripgrep will never
depend on code that is not permissively licensed. This means rejecting any
dependency that uses a copyleft license such as the GPL, LGPL, MPL or any of
the Creative Commons ShareAlike licenses. Whether the license is "weak"
copyleft or not does not matter; ripgrep will **not** depend on it.
<h3 name="posix4ever">
Can ripgrep replace grep?
@@ -530,3 +625,40 @@ for the previous section apply.
* Is there a particular feature of grep you rely on that ripgrep either doesn't
have or never will have? If the former, file a bug report, maybe ripgrep can
do it! If the latter, well, then, just use grep.
<h3 name="intentcountsforsomething">
What does the "rip" in ripgrep mean?
</h3>
When I first started writing ripgrep, I called it `rep`, intending it to be a
shorter variant of `grep`. Soon after, I renamed it to `xrep` since `rep`
wasn't obvious enough of a name for my taste. And also because adding `x` to
anything always makes it better, right?
Before ripgrep's first public release, I decided that I didn't like `xrep`. I
thought it was slightly awkward to type, and despite my previous praise of the
letter `x`, I kind of thought it was pretty lame. Being someone who really
likes Rust, I wanted to call it "rustgrep" or maybe "rgrep" for short. But I
thought that was just as lame, and maybe a little too in-your-face. But I
wanted to continue using `r` so I could at least pretend Rust had something to
do with it.
I spent a couple of days trying to think of very short words that began with
the letter `r` that were even somewhat related to the task of searching. I
don't remember how it popped into my head, but "rip" came up as something that
meant "fast," as in, "to rip through your text." The fact that RIP is also
an initialism for "Rest in Peace" (as in, "ripgrep kills grep") never really
dawned on me. Perhaps the coincidence is too striking to believe that, but
I didn't realize it until someone explicitly pointed it out to me after the
initial public release. I admit that I found it mildly amusing, but if I had
realized it myself before the public release, I probably would have pressed on
and chose a different name. Alas, renaming things after a release is hard, so I
decided to mush on.
Given the fact that
[ripgrep never was, is or will be a 100% drop-in replacement for
grep](#posix4ever),
ripgrep is neither actually a "grep killer" nor was it ever intended to be. It
certainly does eat into some of its use cases, but that's nothing that other
tools like ack or The Silver Searcher weren't already doing.

View File

@@ -539,6 +539,13 @@ $ cat $HOME/.ripgreprc
--type-add
web:*.{html,css,js}*
# Using glob patterns to include/exclude files or folders
--glob=!git/*
# or
--glob
!git/*
# Set the colors.
--colors=line:none
--colors=line:style:bold

View File

@@ -2,6 +2,12 @@
Replace this text with the output of `rg --version`.
#### How did you install ripgrep?
If you installed ripgrep with snap and are getting strange file permission or
file not found errors, then please do not file a bug. Instead, use one of the
Github binary releases.
#### What operating system are you using ripgrep on?
Replace this text with your operating system and version.

View File

@@ -9,7 +9,7 @@ ack and grep.
[![Linux build status](https://travis-ci.org/BurntSushi/ripgrep.svg?branch=master)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/ripgrep.svg)](https://crates.io/crates/ripgrep)
[![Crates.io](https://img.shields.io/crates/v/ripgrep.svg)](https://crates.io/crates/ripgrep)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
@@ -106,7 +106,10 @@ increases the times to `2.640s` for ripgrep and `10.277s` for GNU grep.
automatically detecting UTF-16 is provided. Other text encodings must be
specifically specified with the `-E/--encoding` flag.)
* ripgrep supports searching files compressed in a common format (gzip, xz,
lzma or bzip2 current) with the `-z/--search-zip` flag.
lzma, bzip2 or lz4) with the `-z/--search-zip` flag.
* ripgrep supports arbitrary input preprocessing filters which could be PDF
text extraction, less supported decompression, decrypting, automatic encoding
detection and so on.
In other words, use ripgrep if you like speed, filtering by default, fewer
bugs, and Unicode support.
@@ -151,7 +154,7 @@ Summarizing, ripgrep is fast because:
latter is better for large directories. ripgrep chooses the best searching
strategy for you automatically.
* Applies your ignore patterns in `.gitignore` files using a
[`RegexSet`](https://doc.rust-lang.org/regex/regex/struct.RegexSet.html).
[`RegexSet`](https://docs.rs/regex/1.0.0/regex/struct.RegexSet.html).
That means a single file path can be matched against multiple glob patterns
simultaneously.
* It uses a lock-free parallel recursive directory iterator, courtesy of
@@ -194,7 +197,14 @@ optimizations) by utilizing a custom tap:
```
$ brew tap burntsushi/ripgrep https://github.com/BurntSushi/ripgrep.git
$ brew install burntsushi/ripgrep/ripgrep-bin
$ brew install ripgrep-bin
```
If you're a **MacPorts** user, then you can install ripgrep from the
[official ports](https://www.macports.org/ports.php?by=name&substr=ripgrep):
```
$ sudo port install ripgrep
```
If you're a **Windows Chocolatey** user, then you can install ripgrep from the [official repo](https://chocolatey.org/packages/ripgrep):
@@ -203,6 +213,12 @@ If you're a **Windows Chocolatey** user, then you can install ripgrep from the [
$ choco install ripgrep
```
If you're a **Windows Scoop** user, then you can install ripgrep from the [official bucket](https://github.com/lukesampson/scoop/blob/master/bucket/ripgrep.json):
```
$ scoop install ripgrep
```
If you're an **Arch Linux** user, then you can install ripgrep from the official repos:
```
@@ -228,6 +244,12 @@ $ sudo dnf copr enable carlwgeorge/ripgrep
$ sudo dnf install ripgrep
```
If you're an **openSUSE Tumbleweed** user, you can install ripgrep from the [official repo](http://software.opensuse.org/package/ripgrep):
```
$ sudo zypper install ripgrep
```
If you're a **RHEL/CentOS 7** user, you can install ripgrep from [copr](https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/):
```
@@ -249,21 +271,35 @@ then ripgrep can be installed using a binary `.deb` file provided in each
ripgrep is not in the official Debian or Ubuntu repositories.
```
$ curl -LO https://github.com/BurntSushi/ripgrep/releases/download/0.8.0/ripgrep_0.8.0_amd64.deb
$ sudo dpkg -i ripgrep_0.8.0_amd64.deb
$ curl -LO https://github.com/BurntSushi/ripgrep/releases/download/0.8.1/ripgrep_0.8.1_amd64.deb
$ sudo dpkg -i ripgrep_0.8.1_amd64.deb
```
If you're an **Ubuntu** user, ripgrep can be installed from the `snap` store.
* Note that if you are using `16.04 LTS` or later, snap is already installed.
* For older versions you can install snap using
[this guide](https://docs.snapcraft.io/core/install-ubuntu).
(N.B. Various snaps for ripgrep on Ubuntu are also available, but none of them
seem to work right and generate a number of very strange bug reports that I
don't know how to fix and don't have the time to fix. Therefore, it is no
longer a recommended installation option.)
If you're a **FreeBSD** user, then you can install ripgrep from the [official ports](https://www.freshports.org/textproc/ripgrep/):
```
$ sudo snap install rg
# pkg install ripgrep
```
If you're an **OpenBSD** user, then you can install ripgrep from the [official ports](http://openports.se/textproc/ripgrep):
```
$ doas pkg_add ripgrep
```
If you're a **NetBSD** user, then you can install ripgrep from [pkgsrc](http://pkgsrc.se/textproc/ripgrep):
```
# pkgin install ripgrep
```
If you're a **Rust programmer**, ripgrep can be installed with `cargo`.
* Note that the minimum supported version of Rust for ripgrep is **1.20**,
* Note that the minimum supported version of Rust for ripgrep is **1.23.0**,
although ripgrep may work with older versions.
* Note that the binary may be bigger than expected because it contains debug
symbols. This is intentional. To remove debug symbols and therefore reduce
@@ -273,6 +309,9 @@ If you're a **Rust programmer**, ripgrep can be installed with `cargo`.
$ cargo install ripgrep
```
When compiling with Rust 1.27 or newer, this will automatically enable SIMD
optimizations for search.
ripgrep isn't currently in any other package repositories.
[I'd like to change that](https://github.com/BurntSushi/ripgrep/issues/10).
@@ -281,7 +320,7 @@ ripgrep isn't currently in any other package repositories.
ripgrep is written in Rust, so you'll need to grab a
[Rust installation](https://www.rust-lang.org/) in order to compile it.
ripgrep compiles with Rust 1.20 (stable) or newer. Building is easy:
ripgrep compiles with Rust 1.23.0 (stable) or newer. Building is easy:
```
$ git clone https://github.com/BurntSushi/ripgrep
@@ -292,14 +331,21 @@ $ ./target/release/rg --version
```
If you have a Rust nightly compiler and a recent Intel CPU, then you can enable
optional SIMD acceleration like so:
additional optional SIMD acceleration like so:
```
RUSTFLAGS="-C target-cpu=native" cargo build --release --features 'simd-accel avx-accel'
```
If your machine doesn't support AVX instructions, then simply remove
`avx-accel` from the features list. Similarly for SIMD.
`avx-accel` from the features list. Similarly for SIMD (which corresponds
roughly to SSE instructions).
The `simd-accel` and `avx-accel` features enable SIMD support in certain
ripgrep dependencies (responsible for counting lines and transcoding). They
are not necessary to get SIMD optimizations for search; those are enabled
automatically. Hopefully, some day, the `simd-accel` and `avx-accel` features
will similarly become unnecessary.
### Running tests

View File

@@ -50,7 +50,6 @@ test_script:
before_deploy:
# Generate artifacts for release
# TODO(burntsushi): How can we enable SSSE3 on Windows?
- cargo build --release
- mkdir staging
- copy target\release\rg.exe staging

View File

@@ -8,12 +8,7 @@ set -ex
# Generate artifacts for release
mk_artifacts() {
if is_ssse3_target; then
RUSTFLAGS="-C target-feature=+ssse3" \
cargo build --target "$TARGET" --release --features simd-accel
else
cargo build --target "$TARGET" --release
fi
cargo build --target "$TARGET" --release
}
mk_tarball() {

View File

@@ -27,7 +27,7 @@ install_osx_dependencies() {
return
fi
brew install asciidoc
brew install asciidoc docbook-xsl
}
configure_cargo() {

View File

@@ -1,41 +1,42 @@
#!/usr/bin/env zsh
emulate zsh -o extended_glob -o no_function_argzero -o no_unset
##
# Compares options in `rg --help` output to options in zsh completion function
emulate -R zsh
setopt extended_glob
setopt no_function_argzero
setopt no_unset
get_comp_args() {
# Technically there are many options that the completion system sets that
# our function may rely on, but we'll trust that we've got it mostly right
setopt local_options unset
# Our completion function recognises a special variable which tells it to
# dump the _arguments specs and then just return. But do this in a sub-shell
# anyway to avoid any weirdness
( _RG_COMPLETE_LIST_ARGS=1 source $1 )
return $?
}
main() {
local diff
local rg="${${0:a}:h}/../target/${TARGET:-}/release/rg"
local _rg="${${0:a}:h}/../complete/_rg"
local rg="${0:a:h}/../target/${TARGET:-}/release/rg"
local _rg="${0:a:h}/../complete/_rg"
local -a help_args comp_args
[[ -e $rg ]] || rg=${rg/%\/release\/rg/\/debug\/rg}
rg=${rg:a}
_rg=${_rg:a}
[[ -e $rg ]] || {
printf >&2 'File not found: %s\n' $rg
print -r >&2 "File not found: $rg"
return 1
}
[[ -e $_rg ]] || {
printf >&2 'File not found: %s\n' $_rg
print -r >&2 "File not found: $_rg"
return 1
}
printf 'Comparing options:\n-%s\n+%s\n' $rg $_rg
print -rl - 'Comparing options:' "-$rg" "+$_rg"
# 'Parse' options out of the `--help` output. To prevent false positives we
# only look at lines where the first non-white-space character is `-`
@@ -50,6 +51,8 @@ main() {
# 'Parse' options out of the completion function
comp_args=( ${(f)"$( get_comp_args $_rg )"} )
# Note that we currently exclude hidden (!...) options; matching these
# properly against the `--help` output could be irritating
comp_args=( ${comp_args#\(*\)} ) # Strip excluded options
comp_args=( ${comp_args#\*} ) # Strip repetition indicator
comp_args=( ${comp_args%%-[:[]*} ) # Strip everything after -optname-
@@ -57,14 +60,14 @@ main() {
comp_args=( ${comp_args##[^-]*} ) # Remove non-options
# This probably isn't necessary, but we should ensure the same order
comp_args=( ${(f)"$( printf '%s\n' $comp_args | sort -u )"} )
comp_args=( ${(f)"$( print -rl - $comp_args | sort -u )"} )
(( $#help_args )) || {
printf >&2 'Failed to get help_args\n'
print -r >&2 'Failed to get help_args'
return 1
}
(( $#comp_args )) || {
printf >&2 'Failed to get comp_args\n'
print -r >&2 'Failed to get comp_args'
return 1
}
@@ -73,12 +76,12 @@ main() {
diff -U2 \
--label '`rg --help`' \
--label '`_rg`' \
=( printf '%s\n' $help_args ) =( printf '%s\n' $comp_args )
=( print -rl - $help_args ) =( print -rl - $comp_args )
else
diff -U2 \
-L '`rg --help`' \
-L '`_rg`' \
=( printf '%s\n' $help_args ) =( printf '%s\n' $comp_args )
=( print -rl - $help_args ) =( print -rl - $comp_args )
fi
)"
@@ -91,4 +94,4 @@ main() {
return 0
}
main "${@}"
main "$@"

View File

@@ -6,160 +6,277 @@
# Run ci/test_complete.sh after building to ensure that the options supported by
# this function stay in synch with the `rg` binary.
#
# @see http://zsh.sourceforge.net/Doc/Release/Completion-System.html
# @see https://github.com/zsh-users/zsh/blob/master/Etc/completion-style-guide
#
# Based on code from the zsh-users project — see copyright notice below.
# Originally based on code from the zsh-users project — see copyright notice
# below.
_rg() {
local state_descr ret curcontext="${curcontext:-}"
local -a context line state
local -A opt_args val_args
local -a rg_args
local curcontext=$curcontext no='!' descr ret=1
local -a context line state state_descr args tmp suf
local -A opt_args
# Sort by long option name to match `rg --help`
rg_args=(
'(-A -C --after-context --context)'{-A+,--after-context=}'[specify lines to show after each match]:number of lines'
'(-B -C --before-context --context)'{-B+,--before-context=}'[specify lines to show before each match]:number of lines'
'(-i -s -S --ignore-case --case-sensitive --smart-case)'{-s,--case-sensitive}'[search case-sensitively]'
'--color=[specify when to use colors in output]:when:( never auto always ansi )'
'*--colors=[specify color settings and styles]: :->colorspec'
'--column[show column numbers]'
'(-A -B -C --after-context --before-context --context)'{-C+,--context=}'[specify lines to show before and after each match]:number of lines'
'--context-separator=[specify string used to separate non-continuous context lines in output]:separator'
'(-c --count --passthrough --passthru)'{-c,--count}'[only show count of matches for each file]'
'--debug[show debug messages]'
'--dfa-size-limit=[specify upper size limit of generated DFA]:DFA size'
'(-E --encoding)'{-E+,--encoding=}'[specify text encoding of files to search]: :_rg_encodings'
'*'{-f+,--file=}'[specify file containing patterns to search for]:file:_files'
"(1)--files[show each file that would be searched (but don't search)]"
'(-l --files-with-matches --files-without-match)'{-l,--files-with-matches}'[only show names of files with matches]'
'(-l --files-with-matches --files-without-match)--files-without-match[only show names of files without matches]'
'(-F --fixed-strings)'{-F,--fixed-strings}'[treat pattern as literal string instead of regular expression]'
'(-L --follow)'{-L,--follow}'[follow symlinks]'
'*'{-g+,--glob=}'[include or exclude files for searching that match the specified glob]:glob'
'(: -)'{-h,--help}'[display help information]'
'(-p --no-heading --pretty --vimgrep)--heading[show matches grouped by file name]'
# ripgrep has many options which negate the effect of a more common one — for
# example, `--no-column` to negate `--column`, and `--messages` to negate
# `--no-messages`. There are so many of these, and they're so infrequently
# used, that some users will probably find it irritating if they're completed
# indiscriminately, so let's not do that unless either the current prefix
# matches one of those negation options or the user has the `complete-all`
# style set. Note that this prefix check has to be updated manually to account
# for all of the potential negation options listed below!
if
# (--[imn]* => --ignore*, --messages, --no-*)
[[ $PREFIX$SUFFIX == --[imn]* ]] ||
zstyle -t ":complete:$curcontext:*" complete-all
then
no=
fi
# We make heavy use of argument groups here to prevent the option specs from
# growing unwieldy. These aren't supported in zsh <5.4, though, so we'll strip
# them out below if necessary. This makes the exclusions inaccurate on those
# older versions, but oh well — it's not that big a deal
args=(
+ '(exclusive)' # Misc. fully exclusive options
'(: * -)'{-h,--help}'[display help information]'
'(: * -)'{-V,--version}'[display version information]'
+ '(case)' # Case-sensitivity options
{-i,--ignore-case}'[search case-insensitively]'
{-s,--case-sensitive}'[search case-sensitively]'
{-S,--smart-case}'[search case-insensitively if pattern is all lowercase]'
+ '(context-a)' # Context (after) options
'(context-c)'{-A+,--after-context=}'[specify lines to show after each match]:number of lines'
+ '(context-b)' # Context (before) options
'(context-c)'{-B+,--before-context=}'[specify lines to show before each match]:number of lines'
+ '(context-c)' # Context (combined) options
'(context-a context-b)'{-C+,--context=}'[specify lines to show before and after each match]:number of lines'
+ '(column)' # Column options
'--column[show column numbers for matches]'
$no"--no-column[don't show column numbers for matches]"
+ '(count)' # Counting options
'(passthru)'{-c,--count}'[only show count of matching lines for each file]'
'(passthru)--count-matches[only show count of individual matches for each file]'
+ file # File-input options
'*'{-f+,--file=}'[specify file containing patterns to search for]: :_files'
+ '(file-match)' # Files with/without match options
'(stats)'{-l,--files-with-matches}'[only show names of files with matches]'
'(stats)--files-without-match[only show names of files without matches]'
+ '(file-name)' # File-name options
{-H,--with-filename}'[show file name for matches]'
"--no-filename[don't show file name for matches]"
+ '(fixed)' # Fixed-string options
{-F,--fixed-strings}'[treat pattern as literal string instead of regular expression]'
$no"--no-fixed-strings[don't treat pattern as literal string]"
+ '(follow)' # Symlink-following options
{-L,--follow}'[follow symlinks]'
$no"--no-follow[don't follow symlinks]"
+ glob # File-glob options
'*'{-g+,--glob=}'[include/exclude files matching specified glob]:glob'
'*--iglob=[include/exclude files matching specified case-insensitive glob]:glob'
+ '(heading)' # Heading options
'(pretty-vimgrep)--heading[show matches grouped by file name]'
"(pretty-vimgrep)--no-heading[don't show matches grouped by file name]"
+ '(hidden)' # Hidden-file options
'--hidden[search hidden files and directories]'
'*--iglob=[include or exclude files for searching that match the specified case-insensitive glob]:glob'
'(-i -s -S --case-sensitive --ignore-case --smart-case)'{-i,--ignore-case}'[search case-insensitively]'
'--ignore-file=[specify additional ignore file]:file:_files'
'(-v --invert-match)'{-v,--invert-match}'[invert matching]'
'(-n -N --line-number --no-line-number)'{-n,--line-number}'[show line numbers]'
'(-N --no-line-number)--line-number-width=[specify width of displayed line number]:number of columns'
'(-w -x --line-regexp --word-regexp)'{-x,--line-regexp}'[only show matches surrounded by line boundaries]'
'(-M --max-columns)'{-M+,--max-columns=}'[specify max length of lines to print]:number of bytes'
'(-m --max-count)'{-m+,--max-count=}'[specify max number of matches per file]:number of matches'
'--max-filesize=[specify size above which files should be ignored]:file size'
'--maxdepth=[specify max number of directories to descend]:number of directories'
'(--mmap --no-mmap)--mmap[search using memory maps when possible]'
'(-H --with-filename --no-filename)--no-filename[suppress all file names]'
"(-p --heading --pretty --vimgrep)--no-heading[don't group matches by file name]"
"--no-config[don't load configuration files]"
"(--no-ignore-parent)--no-ignore[don't respect ignore files]"
$no"--no-hidden[don't search hidden files and directories]"
+ '(ignore)' # Ignore-file options
"(--no-ignore-global --no-ignore-parent --no-ignore-vcs)--no-ignore[don't respect ignore files]"
$no'(--ignore-global --ignore-parent --ignore-vcs)--ignore[respect ignore files]'
+ '(ignore-global)' # Global ignore-file options
"--no-ignore-global[don't respect global ignore files]"
$no'--ignore-global[respect global ignore files]'
+ '(ignore-parent)' # Parent ignore-file options
"--no-ignore-parent[don't respect ignore files in parent directories]"
$no'--ignore-parent[respect ignore files in parent directories]'
+ '(ignore-vcs)' # VCS ignore-file options
"--no-ignore-vcs[don't respect version control ignore files]"
'(-n -N --line-number --no-line-number)'{-N,--no-line-number}'[suppress line numbers]'
'--no-messages[suppress all error messages]'
"(--mmap --no-mmap)--no-mmap[don't search using memory maps]"
'(-0 --null)'{-0,--null}'[print NUL byte after file names]'
'(-o -r --only-matching --passthrough --passthru --replace)'{-o,--only-matching}'[show only matching part of each line]'
'(-c -o -r --count --only-matching --passthrough --replace)--passthru[show both matching and non-matching lines]'
'!(-c -o -r --count --only-matching --passthru --replace)--passthrough'
'--path-separator=[specify path separator to use when printing file names]:separator'
'(-p --heading --no-heading --pretty --vimgrep)'{-p,--pretty}'[alias for --color=always --heading -n]'
'(-q --quiet)'{-q,--quiet}'[suppress normal output]'
'--regex-size-limit=[specify upper size limit of compiled regex]:regex size'
'(1 -f --file)*'{-e+,--regexp=}'[specify pattern]:pattern'
'(-c -o -r --count --only-matching --passthrough --passthru --replace)'{-r+,--replace=}'[specify string used to replace matches]:replace string'
'(-i -s -S --ignore-case --case-sensitive --smart-case)'{-S,--smart-case}'[search case-insensitively if the pattern is all lowercase]'
'(-j --threads)--sort-files[sort results by file path (disables parallelism)]'
'(-a --text)'{-a,--text}'[search binary files as if they were text]'
'(-j --sort-files --threads)'{-j+,--threads=}'[specify approximate number of threads to use]:number of threads'
$no'--ignore-vcs[respect version control ignore files]'
+ '(line)' # Line-number options
{-n,--line-number}'[show line numbers for matches]'
{-N,--no-line-number}"[don't show line numbers for matches]"
+ '(max-depth)' # Directory-depth options
'--max-depth=[specify max number of directories to descend]:number of directories'
'!--maxdepth=:number of directories'
+ '(messages)' # Error-message options
'(--no-ignore-messages)--no-messages[suppress some error messages]'
$no"--messages[don't suppress error messages affected by --no-messages]"
+ '(messages-ignore)' # Ignore-error message options
"--no-ignore-messages[don't show ignore-file parse error messages]"
$no'--ignore-messages[show ignore-file parse error messages]'
+ '(mmap)' # mmap options
'--mmap[search using memory maps when possible]'
"--no-mmap[don't search using memory maps]"
+ '(only)' # Only-match options
'(passthru replace)'{-o,--only-matching}'[show only matching part of each line]'
+ '(passthru)' # Pass-through options
'(--vimgrep count only replace)--passthru[show both matching and non-matching lines]'
'!(--vimgrep count only replace)--passthrough'
+ '(pre)' # Preprocessing options
'(-z --search-zip)--pre=[specify preprocessor utility]:preprocessor utility:_command_names -e'
$no'--no-pre[disable preprocessor utility]'
+ '(pretty-vimgrep)' # Pretty/vimgrep display options
'(heading)'{-p,--pretty}'[alias for --color=always --heading -n]'
'(heading passthru)--vimgrep[show results in vim-compatible format]'
+ regexp # Explicit pattern options
'(1 file)*'{-e+,--regexp=}'[specify pattern]:pattern'
+ '(replace)' # Replacement options
'(count only passthru)'{-r+,--replace=}'[specify string used to replace matches]:replace string'
+ '(sort)' # File-sorting options
'(threads)--sort-files[sort results by file path (disables parallelism)]'
$no"--no-sort-files[don't sort results by file path]"
+ stats # Statistics options
'(--files file-match)--stats[show search statistics]'
+ '(text)' # Binary-search options
{-a,--text}'[search binary files as if they were text]'
$no"--no-text[don't search binary files as if they were text]"
+ '(threads)' # Thread-count options
'(--sort-files)'{-j+,--threads=}'[specify approximate number of threads to use]:number of threads'
+ type # Type options
'*'{-t+,--type=}'[only search files matching specified type]: :_rg_types'
'*--type-add=[add new glob for file type]: :->typespec'
'*--type-add=[add new glob for specified file type]: :->typespec'
'*--type-clear=[clear globs previously defined for specified file type]: :_rg_types'
# This should actually be exclusive with everything but other type options
'(:)--type-list[show all supported file types and their associated globs]'
'*'{-T+,--type-not=}"[don't search files matching specified type]: :_rg_types"
'(: *)--type-list[show all supported file types and their associated globs]'
'*'{-T+,--type-not=}"[don't search files matching specified file type]: :_rg_types"
+ '(word-line)' # Whole-word/line match options
{-w,--word-regexp}'[only show matches surrounded by word boundaries]'
{-x,--line-regexp}'[only show matches surrounded by line boundaries]'
+ '(zip)' # Compression options
'(--pre)'{-z,--search-zip}'[search in compressed files]'
$no"--no-search-zip[don't search in compressed files]"
+ misc # Other options — no need to separate these at the moment
'(-b --byte-offset)'{-b,--byte-offset}'[show 0-based byte offset for each matching line]'
'--color=[specify when to use colors in output]:when:((
never\:"never use colors"
auto\:"use colors or not based on stdout, TERM, etc."
always\:"always use colors"
ansi\:"always use ANSI colors (even on Windows)"
))'
'*--colors=[specify color and style settings]: :->colorspec'
'--context-separator=[specify string used to separate non-continuous context lines in output]:separator'
'--debug[show debug messages]'
'--dfa-size-limit=[specify upper size limit of generated DFA]:DFA size (bytes)'
'(-E --encoding)'{-E+,--encoding=}'[specify text encoding of files to search]: :_rg_encodings'
"(1 stats)--files[show each file that would be searched (but don't search)]"
'*--ignore-file=[specify additional ignore file]:ignore file:_files'
'(-v --invert-match)'{-v,--invert-match}'[invert matching]'
'(-M --max-columns)'{-M+,--max-columns=}'[specify max length of lines to print]:number of bytes'
'(-m --max-count)'{-m+,--max-count=}'[specify max number of matches per file]:number of matches'
'--max-filesize=[specify size above which files should be ignored]:file size (bytes)'
"--no-config[don't load configuration files]"
'(-0 --null)'{-0,--null}'[print NUL byte after file names]'
'--path-separator=[specify path separator to use when printing file names]:separator'
'(-q --quiet)'{-q,--quiet}'[suppress normal output]'
'--regex-size-limit=[specify upper size limit of compiled regex]:regex size (bytes)'
'*'{-u,--unrestricted}'[reduce level of "smart" searching]'
'(: -)'{-V,--version}'[display version information]'
'(-p --heading --no-heading --pretty)--vimgrep[show results in vim-compatible format]'
'(-H --no-filename --with-filename)'{-H,--with-filename}'[display the file name for matches]'
'(-w -x --line-regexp --word-regexp)'{-w,--word-regexp}'[only show matches surrounded by word boundaries]'
'(-e -f --file --files --regexp --type-list)1: :_rg_pattern'
'(--type-list)*:file:_files'
'(-z --search-zip)'{-z,--search-zip}'[search in compressed files]'
+ operand # Operands
'(--files --type-list file regexp)1: :_guard "^-*" pattern'
'(--type-list)*: :_files'
)
[[ ${_RG_COMPLETE_LIST_ARGS:-} == (1|t*|y*) ]] && {
printf '%s\n' "${rg_args[@]}"
# This is used with test_complete.sh to verify that there are no options
# listed in the help output that aren't also defined here
[[ $_RG_COMPLETE_LIST_ARGS == (1|t*|y*) ]] && {
print -rl - $args
return 0
}
_arguments -s -S : "${rg_args[@]}" && return 0
# Strip out argument groups where unsupported (see above)
[[ $ZSH_VERSION == (4|5.<0-3>)(.*)# ]] &&
args=( ${(@)args:#(#i)(+|[a-z0-9][a-z0-9_-]#|\([a-z0-9][a-z0-9_-]#\))} )
while (( $#state )); do
case "${state[1]}" in
colorspec)
# @todo I don't like this because it allows you to do weird things like
# `line:line:bg:`. Also, i would like the `compadd -q` behaviour
[[ -prefix *:none: ]] && return 1
[[ -prefix *:*:*:* ]] && return 1
_arguments -C -s -S : $args && ret=0
_values -S ':' 'color/style type' \
'column[specify coloring for column numbers]: :->attribute' \
'line[specify coloring for line numbers]: :->attribute' \
'match[specify coloring for match text]: :->attribute' \
'path[specify color for file names]: :->attribute' && return 0
case $state in
colorspec)
if [[ ${IPREFIX#--*=}$PREFIX == [^:]# ]]; then
suf=( -qS: )
tmp=(
'column:specify coloring for column numbers'
'line:specify coloring for line numbers'
'match:specify coloring for match text'
'path:specify coloring for file names'
)
descr='color/style type'
elif [[ ${IPREFIX#--*=}$PREFIX == (column|line|match|path):[^:]# ]]; then
suf=( -qS: )
tmp=(
'none:clear color/style for type'
'bg:specify background color'
'fg:specify foreground color'
'style:specify text style'
)
descr='color/style attribute'
elif [[ ${IPREFIX#--*=}$PREFIX == [^:]##:(bg|fg):[^:]# ]]; then
tmp=( black blue green red cyan magenta yellow white )
descr='color name or r,g,b'
elif [[ ${IPREFIX#--*=}$PREFIX == [^:]##:style:[^:]# ]]; then
tmp=( {,no}bold {,no}intense {,no}underline )
descr='style name'
else
_message -e colorspec 'no more arguments'
fi
[[ "${state}" == 'attribute' ]] &&
_values -S ':' 'color/style attribute' \
'none[clear color/style for type]' \
'bg[specify background color]: :->color' \
'fg[specify foreground color]: :->color' \
'style[specify text style]: :->style' && return 0
(( $#tmp )) && {
compset -P '*:'
_describe -t colorspec $descr tmp $suf && ret=0
}
;;
[[ "${state}" == 'color' ]] &&
_values -S ':' 'color value' \
black blue green red cyan magenta yellow white && return 0
typespec)
if compset -P '[^:]##:include:'; then
_sequence -s , _rg_types && ret=0
# @todo This bit in particular could be better, but it's a little
# complex, and attempting to solve it seems to run us up against a crash
# bug — zsh # 40362
elif compset -P '[^:]##:'; then
_message 'glob or include directive' && ret=1
elif [[ ! -prefix *:* ]]; then
_rg_types -qS : && ret=0
fi
;;
esac
[[ "${state}" == 'style' ]] &&
_values -S ':' 'style value' \
bold nobold intense nointense underline nounderline && return 0
;;
typespec)
if compset -P '[^:]##:include:'; then
_sequence -s ',' _rg_types && return 0
# @todo This bit in particular could be better, but it's a little
# complex, and attempting to solve it seems to run us up against a crash
# bug — zsh # 40362
elif compset -P '[^:]##:'; then
_message 'glob or include directive' && return 1
elif [[ ! -prefix *:* ]]; then
_rg_types -qS ':' && return 0
fi
;;
esac
shift state
done
return 1
}
# zsh 5.1 refuses to complete options if a 'match-less' operand like our pattern
# could be 'completed' instead. We can use _guard() to avoid this problem, but
# it introduces another one: zsh won't print the message if we try to complete
# the pattern after having passed `--`. To work around *that* problem, we can
# use this function to bypass the _guard() when `--` is on the command line.
# This is inaccurate (it'd get confused by e.g. `rg -e --`), but zsh's handling
# of `--` isn't accurate anyway
_rg_pattern() {
if (( ${words[(I)--]} )); then
_message 'pattern'
else
_guard '^-*' 'pattern'
fi
return ret
}
# Complete encodings
@@ -195,7 +312,7 @@ _rg_encodings() {
x-user-defined auto
)
_wanted rg-encodings expl 'encoding' compadd -a "${@}" - _encodings
_wanted encodings expl encoding compadd -a "$@" - _encodings
}
# Complete file types
@@ -203,12 +320,12 @@ _rg_types() {
local -a expl
local -aU _types
_types=( ${${(f)"$( _call_program rg-types rg --type-list )"}%%:*} )
_types=( ${(@)${(f)"$( _call_program types rg --type-list )"}%%:*} )
_wanted rg-types expl 'file type' compadd -a "${@}" - _types
_wanted types expl 'file type' compadd -a "$@" - _types
}
_rg "${@}"
_rg "$@"
# ------------------------------------------------------------------------------
# Copyright (c) 2011 Github zsh-users - http://github.com/zsh-users

View File

@@ -12,12 +12,14 @@ Synopsis
*rg* [_OPTIONS_] *-e* _PATTERN_... [_PATH_...]
*rg* [_OPTIONS_] *-f* _PATH_... [_PATH_...]
*rg* [_OPTIONS_] *-f* _PATTERNFILE_... [_PATH_...]
*rg* [_OPTIONS_] *--files* [_PATH_...]
*rg* [_OPTIONS_] *--type-list*
*command* | *rg* [_OPTIONS_] _PATTERN_
*rg* [_OPTIONS_] *--help*
*rg* [_OPTIONS_] *--version*
@@ -98,6 +100,28 @@ would behave identically to the following command
rg --smart-case foo
another example is adding types
--type-add
web:*.{html,css,js}*
would behave identically to the following command
rg --type-add 'web:*.{html,css,js}*' foo
same with using globs
--glob=!git/*
or
--glob
!git/*
would behave identically to the following command
rg --glob '!git/*' foo
ripgrep also provides a flag, *--no-config*, that when present will suppress
any and all support for configuration. This includes any future support
for auto-loading configuration files from pre-determined paths.

View File

@@ -1,6 +1,6 @@
[package]
name = "globset"
version = "0.3.0" #:version
version = "0.4.1" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Cross platform single glob and glob set matching. Glob set matching is the
@@ -23,10 +23,10 @@ aho-corasick = "0.6.0"
fnv = "1.0"
log = "0.4"
memchr = "2"
regex = "0.2.1"
regex = "1"
[dev-dependencies]
glob = "0.2"
[features]
simd-accel = ["regex/simd-accel"]
simd-accel = []

View File

@@ -20,7 +20,7 @@ Add this to your `Cargo.toml`:
```toml
[dependencies]
globset = "0.2"
globset = "0.3"
```
and this to your crate root:

View File

@@ -187,13 +187,26 @@ pub struct GlobBuilder<'a> {
opts: GlobOptions,
}
#[derive(Clone, Copy, Debug, Default, Eq, Hash, PartialEq)]
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
struct GlobOptions {
/// Whether to match case insensitively.
case_insensitive: bool,
/// Whether to require a literal separator to match a separator in a file
/// path. e.g., when enabled, `*` won't match `/`.
literal_separator: bool,
/// Whether or not to use `\` to escape special characters.
/// e.g., when enabled, `\*` will match a literal `*`.
backslash_escape: bool,
}
impl GlobOptions {
fn default() -> GlobOptions {
GlobOptions {
case_insensitive: false,
literal_separator: false,
backslash_escape: !is_separator('\\'),
}
}
}
#[derive(Clone, Debug, Default, Eq, PartialEq)]
@@ -262,6 +275,19 @@ impl Glob {
}
/// Returns the regular expression string for this glob.
///
/// Note that regular expressions for globs are intended to be matched on
/// arbitrary bytes (`&[u8]`) instead of Unicode strings (`&str`). In
/// particular, globs are frequently used on file paths, where there is no
/// general guarantee that file paths are themselves valid UTF-8. As a
/// result, callers will need to ensure that they are using a regex API
/// that can match on arbitrary bytes. For example, the
/// [`regex`](https://crates.io/regex)
/// crate's
/// [`Regex`](https://docs.rs/regex/*/regex/struct.Regex.html)
/// API is not suitable for this since it matches on `&str`, but its
/// [`bytes::Regex`](https://docs.rs/regex/*/regex/bytes/struct.Regex.html)
/// API is suitable for this.
pub fn regex(&self) -> &str {
&self.re
}
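As a side note on the documentation added above: here is a minimal sketch (not part of this diff) of pairing `Glob::regex` with the `regex` crate's byte-oriented API, since file paths are not guaranteed to be valid UTF-8. The `*.rs` pattern and the path are illustrative only.
```
use globset::Glob;
use regex::bytes::Regex;

fn main() {
    // Build a glob and fetch its regex string; globset prefixes it with
    // `(?-u)` so it is suitable for matching on raw bytes.
    let glob = Glob::new("*.rs").expect("valid glob");
    // Match against raw bytes, because paths need not be valid UTF-8.
    let re = Regex::new(glob.regex()).expect("valid regex");
    assert!(re.is_match(b"src/main.rs"));
}
```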
@@ -549,6 +575,7 @@ impl<'a> GlobBuilder<'a> {
chars: self.glob.chars().peekable(),
prev: None,
cur: None,
opts: &self.opts,
};
p.parse()?;
if p.stack.is_empty() {
@@ -585,6 +612,19 @@ impl<'a> GlobBuilder<'a> {
self.opts.literal_separator = yes;
self
}
/// When enabled, a back slash (`\`) may be used to escape
/// special characters in a glob pattern. Additionally, this will
/// prevent `\` from being interpreted as a path separator on all
/// platforms.
///
/// This is enabled by default on platforms where `\` is not a
/// path separator and disabled by default on platforms where `\`
/// is a path separator.
pub fn backslash_escape(&mut self, yes: bool) -> &mut GlobBuilder<'a> {
self.opts.backslash_escape = yes;
self
}
}
impl Tokens {
@@ -710,6 +750,7 @@ struct Parser<'a> {
chars: iter::Peekable<str::Chars<'a>>,
prev: Option<char>,
cur: Option<char>,
opts: &'a GlobOptions,
}
impl<'a> Parser<'a> {
@@ -726,14 +767,8 @@ impl<'a> Parser<'a> {
'{' => self.push_alternate()?,
'}' => self.pop_alternate()?,
',' => self.parse_comma()?,
c => {
if is_separator(c) {
// Normalize all patterns to use / as a separator.
self.push_token(Token::Literal('/'))?
} else {
self.push_token(Token::Literal(c))?
}
}
'\\' => self.parse_backslash()?,
c => self.push_token(Token::Literal(c))?,
}
}
Ok(())
@@ -786,6 +821,20 @@ impl<'a> Parser<'a> {
}
}
fn parse_backslash(&mut self) -> Result<(), Error> {
if self.opts.backslash_escape {
match self.bump() {
None => Err(self.error(ErrorKind::DanglingEscape)),
Some(c) => self.push_token(Token::Literal(c)),
}
} else if is_separator('\\') {
// Normalize all patterns to use / as a separator.
self.push_token(Token::Literal('/'))
} else {
self.push_token(Token::Literal('\\'))
}
}
fn parse_star(&mut self) -> Result<(), Error> {
let prev = self.prev;
if self.chars.peek() != Some(&'*') {
@@ -933,8 +982,9 @@ mod tests {
#[derive(Clone, Copy, Debug, Default)]
struct Options {
casei: bool,
litsep: bool,
casei: Option<bool>,
litsep: Option<bool>,
bsesc: Option<bool>,
}
macro_rules! syntax {
@@ -964,11 +1014,17 @@ mod tests {
($name:ident, $pat:expr, $re:expr, $options:expr) => {
#[test]
fn $name() {
let pat = GlobBuilder::new($pat)
.case_insensitive($options.casei)
.literal_separator($options.litsep)
.build()
.unwrap();
let mut builder = GlobBuilder::new($pat);
if let Some(casei) = $options.casei {
builder.case_insensitive(casei);
}
if let Some(litsep) = $options.litsep {
builder.literal_separator(litsep);
}
if let Some(bsesc) = $options.bsesc {
builder.backslash_escape(bsesc);
}
let pat = builder.build().unwrap();
assert_eq!(format!("(?-u){}", $re), pat.regex());
}
};
@@ -981,11 +1037,17 @@ mod tests {
($name:ident, $pat:expr, $path:expr, $options:expr) => {
#[test]
fn $name() {
let pat = GlobBuilder::new($pat)
.case_insensitive($options.casei)
.literal_separator($options.litsep)
.build()
.unwrap();
let mut builder = GlobBuilder::new($pat);
if let Some(casei) = $options.casei {
builder.case_insensitive(casei);
}
if let Some(litsep) = $options.litsep {
builder.literal_separator(litsep);
}
if let Some(bsesc) = $options.bsesc {
builder.backslash_escape(bsesc);
}
let pat = builder.build().unwrap();
let matcher = pat.compile_matcher();
let strategic = pat.compile_strategic_matcher();
let set = GlobSetBuilder::new().add(pat).build().unwrap();
@@ -1003,11 +1065,17 @@ mod tests {
($name:ident, $pat:expr, $path:expr, $options:expr) => {
#[test]
fn $name() {
let pat = GlobBuilder::new($pat)
.case_insensitive($options.casei)
.literal_separator($options.litsep)
.build()
.unwrap();
let mut builder = GlobBuilder::new($pat);
if let Some(casei) = $options.casei {
builder.case_insensitive(casei);
}
if let Some(litsep) = $options.litsep {
builder.literal_separator(litsep);
}
if let Some(bsesc) = $options.bsesc {
builder.backslash_escape(bsesc);
}
let pat = builder.build().unwrap();
let matcher = pat.compile_matcher();
let strategic = pat.compile_strategic_matcher();
let set = GlobSetBuilder::new().add(pat).build().unwrap();
@@ -1091,12 +1159,24 @@ mod tests {
syntaxerr!(err_range2, "[z--]", ErrorKind::InvalidRange('z', '-'));
const CASEI: Options = Options {
casei: true,
litsep: false,
casei: Some(true),
litsep: None,
bsesc: None,
};
const SLASHLIT: Options = Options {
casei: false,
litsep: true,
casei: None,
litsep: Some(true),
bsesc: None,
};
const NOBSESC: Options = Options {
casei: None,
litsep: None,
bsesc: Some(false),
};
const BSESC: Options = Options {
casei: None,
litsep: None,
bsesc: Some(true),
};
toregex!(re_casei, "a", "(?i)^a$", &CASEI);
@@ -1209,6 +1289,17 @@ mod tests {
#[cfg(not(unix))]
matches!(matchslash5, "abc\\def", "abc/def", SLASHLIT);
matches!(matchbackslash1, "\\[", "[", BSESC);
matches!(matchbackslash2, "\\?", "?", BSESC);
matches!(matchbackslash3, "\\*", "*", BSESC);
matches!(matchbackslash4, "\\[a-z]", "\\a", NOBSESC);
matches!(matchbackslash5, "\\?", "\\a", NOBSESC);
matches!(matchbackslash6, "\\*", "\\\\", NOBSESC);
#[cfg(unix)]
matches!(matchbackslash7, "\\a", "a");
#[cfg(not(unix))]
matches!(matchbackslash8, "\\a", "/a");
nmatches!(matchnot1, "a*b*c", "abcd");
nmatches!(matchnot2, "abc*abc*abc", "abcabcabcabcabcabcabca");
nmatches!(matchnot3, "some/**/needle.txt", "some/other/notthis.txt");
@@ -1253,13 +1344,20 @@ mod tests {
($which:ident, $name:ident, $pat:expr, $expect:expr) => {
extract!($which, $name, $pat, $expect, Options::default());
};
($which:ident, $name:ident, $pat:expr, $expect:expr, $opts:expr) => {
($which:ident, $name:ident, $pat:expr, $expect:expr, $options:expr) => {
#[test]
fn $name() {
let pat = GlobBuilder::new($pat)
.case_insensitive($opts.casei)
.literal_separator($opts.litsep)
.build().unwrap();
let mut builder = GlobBuilder::new($pat);
if let Some(casei) = $options.casei {
builder.case_insensitive(casei);
}
if let Some(litsep) = $options.litsep {
builder.literal_separator(litsep);
}
if let Some(bsesc) = $options.bsesc {
builder.backslash_escape(bsesc);
}
let pat = builder.build().unwrap();
assert_eq!($expect, pat.$which());
}
};

View File

@@ -91,6 +91,11 @@ Standard Unix-style glob syntax is supported:
`[!ab]` to match any character except for `a` and `b`.
* Metacharacters such as `*` and `?` can be escaped with character class
notation. e.g., `[*]` matches `*`.
* When backslash escapes are enabled, a backslash (`\`) will escape all meta
characters in a glob. If it precedes a non-meta character, then the backslash
is ignored. A `\\` will match a literal `\`. Note that this mode is only
enabled on Unix platforms by default, but can be enabled on any platform
via the `backslash_escape` setting on `GlobBuilder`.
A `GlobBuilder` can be used to prevent wildcards from matching path separators,
or to enable case insensitive matching.
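For illustration, a minimal standalone sketch of the escaping behavior described above; the pattern and file names are made up and are not part of the test suite:

```rust
extern crate globset;

use globset::GlobBuilder;

fn main() -> Result<(), globset::Error> {
    // Force backslash escaping on, regardless of the platform default.
    let matcher = GlobBuilder::new(r"\*.rs")
        .backslash_escape(true)
        .build()?
        .compile_matcher();

    // `\*` escapes the wildcard, so only a file literally named `*.rs` matches.
    assert!(matcher.is_match("*.rs"));
    assert!(!matcher.is_match("main.rs"));
    Ok(())
}
```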
@@ -154,8 +159,17 @@ pub enum ErrorKind {
/// Occurs when an alternating group is nested inside another alternating
/// group, e.g., `{{a,b},{c,d}}`.
NestedAlternates,
/// Occurs when an unescaped '\' is found at the end of a glob.
DanglingEscape,
/// An error associated with parsing or compiling a regex.
Regex(String),
/// Hints that destructuring should not be exhaustive.
///
/// This enum may grow additional variants, so this makes sure clients
/// don't count on exhaustive matching. (Otherwise, adding a new variant
/// could break existing code.)
#[doc(hidden)]
__Nonexhaustive,
}
impl StdError for Error {
@@ -199,7 +213,11 @@ impl ErrorKind {
ErrorKind::NestedAlternates => {
"nested alternate groups are not allowed"
}
ErrorKind::DanglingEscape => {
"dangling '\\'"
}
ErrorKind::Regex(ref err) => err,
ErrorKind::__Nonexhaustive => unreachable!(),
}
}
}
@@ -223,12 +241,14 @@ impl fmt::Display for ErrorKind {
| ErrorKind::UnopenedAlternates
| ErrorKind::UnclosedAlternates
| ErrorKind::NestedAlternates
| ErrorKind::DanglingEscape
| ErrorKind::Regex(_) => {
write!(f, "{}", self.description())
}
ErrorKind::InvalidRange(s, e) => {
write!(f, "invalid range; '{}' > '{}'", s, e)
}
ErrorKind::__Nonexhaustive => unreachable!(),
}
}
}
@@ -268,6 +288,14 @@ pub struct GlobSet {
}
impl GlobSet {
/// Create an empty `GlobSet`. An empty set matches nothing.
pub fn empty() -> GlobSet {
GlobSet {
len: 0,
strats: vec![],
}
}
/// Returns true if this set is empty, and therefore matches nothing.
pub fn is_empty(&self) -> bool {
self.len == 0
@@ -421,6 +449,7 @@ impl GlobSet {
/// GlobSetBuilder builds a group of patterns that can be used to
/// simultaneously match a file path.
#[derive(Clone, Debug)]
pub struct GlobSetBuilder {
pats: Vec<Glob>,
}
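The new `GlobSet::empty` constructor gives callers (such as `Gitignore::empty` later in this diff) a matcher that is valid but never matches anything. A minimal sketch of both the empty and the built cases, assuming default glob options (where `*` may also match path separators); the globs and path are illustrative:

```rust
extern crate globset;

use globset::{Glob, GlobSet, GlobSetBuilder};

fn main() -> Result<(), globset::Error> {
    // An empty set is a valid matcher that matches nothing at all.
    let empty = GlobSet::empty();
    assert!(empty.is_empty());
    assert!(!empty.is_match("src/lib.rs"));

    // A built set reports which of its globs matched, by index.
    let set = GlobSetBuilder::new()
        .add(Glob::new("*.rs")?)
        .add(Glob::new("src/**")?)
        .build()?;
    assert_eq!(set.matches("src/lib.rs"), vec![0, 1]);
    Ok(())
}
```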

View File

@@ -1,6 +1,6 @@
[package]
name = "grep"
version = "0.1.8" #:version
version = "0.1.9" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Fast line oriented regex searching as a library.
@@ -15,5 +15,5 @@ license = "Unlicense/MIT"
[dependencies]
log = "0.4"
memchr = "2"
regex = "0.2.1"
regex-syntax = "0.4.0"
regex = "1"
regex-syntax = "0.6"

View File

@@ -19,6 +19,7 @@ pub use search::{Grep, GrepBuilder, Iter, Match};
mod literals;
mod nonl;
mod search;
mod smart_case;
mod word_boundary;
/// Result is a convenient type alias that fixes the type of the error to

View File

@@ -10,10 +10,8 @@ principled.
use std::cmp;
use regex::bytes::RegexBuilder;
use syntax::{
Expr, Literals, Lit,
ByteClass, ByteRange, CharClass, ClassRange, Repeater,
};
use syntax::hir::{self, Hir, HirKind};
use syntax::hir::literal::{Literal, Literals};
#[derive(Clone, Debug)]
pub struct LiteralSets {
@@ -23,12 +21,12 @@ pub struct LiteralSets {
}
impl LiteralSets {
pub fn create(expr: &Expr) -> Self {
pub fn create(expr: &Hir) -> Self {
let mut required = Literals::empty();
union_required(expr, &mut required);
LiteralSets {
prefixes: expr.prefixes(),
suffixes: expr.suffixes(),
prefixes: Literals::prefixes(expr),
suffixes: Literals::suffixes(expr),
required: required,
}
}
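As context for this port, the sketch below shows roughly how an HIR and its prefix literals are obtained with regex-syntax 0.6, mirroring the `ParserBuilder` usage in `search.rs` and the `Literals::prefixes` call above; the pattern is only an example:

```rust
extern crate regex_syntax;

use regex_syntax::ParserBuilder;
use regex_syntax::hir::literal::Literals;

fn main() {
    let hir = ParserBuilder::new()
        .allow_invalid_utf8(true)
        .build()
        .parse("Sherlock|Moriarty")
        .unwrap();
    // Prefix literals extracted from the HIR; LiteralSets::create combines
    // these with suffix and inner required literals.
    let prefixes = Literals::prefixes(&hir);
    assert!(!prefixes.literals().is_empty());
}
```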
@@ -69,6 +67,16 @@ impl LiteralSets {
lit = req;
}
// Special case: if we have any literals that are all whitespace,
// then this is probably a failing of the literal detection since
// whitespace is typically pretty common. In this case, don't bother
// with inner literal scanning at all and just defer to the regex.
let any_all_white = req_lits.iter()
.any(|lit| lit.iter().all(|&b| (b as char).is_whitespace()));
if any_all_white {
return None;
}
// Special case: if we detected an alternation of inner required
// literals and its longest literal is bigger than the longest
// prefix/suffix, then choose the alternation. In practice, this
@@ -93,60 +101,52 @@ impl LiteralSets {
}
}
fn union_required(expr: &Expr, lits: &mut Literals) {
use syntax::Expr::*;
match *expr {
Literal { ref chars, casei: false } => {
let s: String = chars.iter().cloned().collect();
lits.cross_add(s.as_bytes());
fn union_required(expr: &Hir, lits: &mut Literals) {
match *expr.kind() {
HirKind::Literal(hir::Literal::Unicode(c)) => {
let mut buf = [0u8; 4];
lits.cross_add(c.encode_utf8(&mut buf).as_bytes());
}
Literal { ref chars, casei: true } => {
for &c in chars {
let cls = CharClass::new(vec![
ClassRange { start: c, end: c },
]).case_fold();
if !lits.add_char_class(&cls) {
HirKind::Literal(hir::Literal::Byte(b)) => {
lits.cross_add(&[b]);
}
HirKind::Class(hir::Class::Unicode(ref cls)) => {
if count_unicode_class(cls) >= 5 || !lits.add_char_class(cls) {
lits.cut();
}
}
HirKind::Class(hir::Class::Bytes(ref cls)) => {
if count_byte_class(cls) >= 5 || !lits.add_byte_class(cls) {
lits.cut();
}
}
HirKind::Group(hir::Group { ref hir, .. }) => {
union_required(&**hir, lits);
}
HirKind::Repetition(ref x) => {
match x.kind {
hir::RepetitionKind::ZeroOrOne => lits.cut(),
hir::RepetitionKind::ZeroOrMore => lits.cut(),
hir::RepetitionKind::OneOrMore => {
union_required(&x.hir, lits);
lits.cut();
return;
}
hir::RepetitionKind::Range(ref rng) => {
let (min, max) = match *rng {
hir::RepetitionRange::Exactly(m) => (m, Some(m)),
hir::RepetitionRange::AtLeast(m) => (m, None),
hir::RepetitionRange::Bounded(m, n) => (m, Some(n)),
};
repeat_range_literals(
&x.hir, min, max, x.greedy, lits, union_required);
}
}
}
LiteralBytes { ref bytes, casei: false } => {
lits.cross_add(bytes);
HirKind::Concat(ref es) if es.is_empty() => {}
HirKind::Concat(ref es) if es.len() == 1 => {
union_required(&es[0], lits)
}
LiteralBytes { ref bytes, casei: true } => {
for &b in bytes {
let cls = ByteClass::new(vec![
ByteRange { start: b, end: b },
]).case_fold();
if !lits.add_byte_class(&cls) {
lits.cut();
return;
}
}
}
Class(_) => {
lits.cut();
}
ClassBytes(_) => {
lits.cut();
}
Group { ref e, .. } => {
union_required(&**e, lits);
}
Repeat { r: Repeater::ZeroOrOne, .. } => lits.cut(),
Repeat { r: Repeater::ZeroOrMore, .. } => lits.cut(),
Repeat { ref e, r: Repeater::OneOrMore, .. } => {
union_required(&**e, lits);
lits.cut();
}
Repeat { ref e, r: Repeater::Range { min, max }, greedy } => {
repeat_range_literals(
&**e, min, max, greedy, lits, union_required);
}
Concat(ref es) if es.is_empty() => {}
Concat(ref es) if es.len() == 1 => union_required(&es[0], lits),
Concat(ref es) => {
HirKind::Concat(ref es) => {
for e in es {
let mut lits2 = lits.to_empty();
union_required(e, &mut lits2);
@@ -157,7 +157,6 @@ fn union_required(expr: &Expr, lits: &mut Literals) {
if lits2.contains_empty() {
lits.cut();
}
// if !lits.union(lits2) {
if !lits.cross_product(&lits2) {
// If this expression couldn't yield any literal that
// could be extended, then we need to quit. Since we're
@@ -167,15 +166,15 @@ fn union_required(expr: &Expr, lits: &mut Literals) {
}
}
}
Alternate(ref es) => {
HirKind::Alternation(ref es) => {
alternate_literals(es, lits, union_required);
}
_ => lits.cut(),
}
}
fn repeat_range_literals<F: FnMut(&Expr, &mut Literals)>(
e: &Expr,
fn repeat_range_literals<F: FnMut(&Hir, &mut Literals)>(
e: &Hir,
min: u32,
max: Option<u32>,
_greedy: bool,
@@ -204,8 +203,8 @@ fn repeat_range_literals<F: FnMut(&Expr, &mut Literals)>(
}
}
fn alternate_literals<F: FnMut(&Expr, &mut Literals)>(
es: &[Expr],
fn alternate_literals<F: FnMut(&Hir, &mut Literals)>(
es: &[Hir],
lits: &mut Literals,
mut f: F,
) {
@@ -234,11 +233,21 @@ fn alternate_literals<F: FnMut(&Expr, &mut Literals)>(
}
lits.cut();
if !lcs.is_empty() {
lits.add(Lit::empty());
lits.add(Lit::new(lcs.to_vec()));
lits.add(Literal::empty());
lits.add(Literal::new(lcs.to_vec()));
}
}
/// Return the number of characters in the given class.
fn count_unicode_class(cls: &hir::ClassUnicode) -> u32 {
cls.iter().map(|r| 1 + (r.end() as u32 - r.start() as u32)).sum()
}
/// Return the number of bytes in the given class.
fn count_byte_class(cls: &hir::ClassBytes) -> u32 {
cls.iter().map(|r| 1 + (r.end() as u32 - r.start() as u32)).sum()
}
/// Converts an arbitrary sequence of bytes to a literal suitable for building
/// a regular expression.
fn bytes_to_regex(bs: &[u8]) -> String {

View File

@@ -1,4 +1,4 @@
use syntax::Expr;
use syntax::hir::{self, Hir, HirKind};
use {Error, Result};
@@ -9,59 +9,66 @@ use {Error, Result};
///
/// If `byte` is not an ASCII character (i.e., greater than `0x7F`), then this
/// function panics.
pub fn remove(expr: Expr, byte: u8) -> Result<Expr> {
// TODO(burntsushi): There is a bug in this routine where only `\n` is
// handled correctly. Namely, `AnyChar` and `AnyByte` need to be translated
// to proper character classes instead of the special `AnyCharNoNL` and
// `AnyByteNoNL` classes.
use syntax::Expr::*;
pub fn remove(expr: Hir, byte: u8) -> Result<Hir> {
assert!(byte <= 0x7F);
let chr = byte as char;
assert!(chr.len_utf8() == 1);
Ok(match expr {
Literal { chars, casei } => {
if chars.iter().position(|&c| c == chr).is_some() {
Ok(match expr.into_kind() {
HirKind::Empty => Hir::empty(),
HirKind::Literal(hir::Literal::Unicode(c)) => {
if c == chr {
return Err(Error::LiteralNotAllowed(chr));
}
Literal { chars: chars, casei: casei }
Hir::literal(hir::Literal::Unicode(c))
}
LiteralBytes { bytes, casei } => {
if bytes.iter().position(|&b| b == byte).is_some() {
HirKind::Literal(hir::Literal::Byte(b)) => {
if b as char == chr {
return Err(Error::LiteralNotAllowed(chr));
}
LiteralBytes { bytes: bytes, casei: casei }
Hir::literal(hir::Literal::Byte(b))
}
AnyChar => AnyCharNoNL,
AnyByte => AnyByteNoNL,
Class(mut cls) => {
cls.remove(chr);
Class(cls)
}
ClassBytes(mut cls) => {
cls.remove(byte);
ClassBytes(cls)
}
Group { e, i, name } => {
Group {
e: Box::new(remove(*e, byte)?),
i: i,
name: name,
HirKind::Class(hir::Class::Unicode(mut cls)) => {
let remove = hir::ClassUnicode::new(Some(
hir::ClassUnicodeRange::new(chr, chr),
));
cls.difference(&remove);
if cls.iter().next().is_none() {
return Err(Error::LiteralNotAllowed(chr));
}
Hir::class(hir::Class::Unicode(cls))
}
Repeat { e, r, greedy } => {
Repeat {
e: Box::new(remove(*e, byte)?),
r: r,
greedy: greedy,
HirKind::Class(hir::Class::Bytes(mut cls)) => {
let remove = hir::ClassBytes::new(Some(
hir::ClassBytesRange::new(byte, byte),
));
cls.difference(&remove);
if cls.iter().next().is_none() {
return Err(Error::LiteralNotAllowed(chr));
}
Hir::class(hir::Class::Bytes(cls))
}
Concat(exprs) => {
Concat(exprs.into_iter().map(|e| remove(e, byte)).collect::<Result<Vec<Expr>>>()?)
HirKind::Anchor(x) => Hir::anchor(x),
HirKind::WordBoundary(x) => Hir::word_boundary(x),
HirKind::Repetition(mut x) => {
x.hir = Box::new(remove(*x.hir, byte)?);
Hir::repetition(x)
}
Alternate(exprs) => {
Alternate(exprs.into_iter().map(|e| remove(e, byte)).collect::<Result<Vec<Expr>>>()?)
HirKind::Group(mut x) => {
x.hir = Box::new(remove(*x.hir, byte)?);
Hir::group(x)
}
HirKind::Concat(xs) => {
let xs = xs.into_iter()
.map(|e| remove(e, byte))
.collect::<Result<Vec<Hir>>>()?;
Hir::concat(xs)
}
HirKind::Alternation(xs) => {
let xs = xs.into_iter()
.map(|e| remove(e, byte))
.collect::<Result<Vec<Hir>>>()?;
Hir::alternation(xs)
}
e => e,
})
}
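The class arms above lean on regex-syntax's set operations. A self-contained sketch of that primitive, removing the line terminator from a character class (the class itself is made up for illustration):

```rust
extern crate regex_syntax;

use regex_syntax::hir::{ClassUnicode, ClassUnicodeRange};

fn main() {
    // A class equivalent to [\x00-\x7F].
    let mut cls = ClassUnicode::new(Some(ClassUnicodeRange::new('\x00', '\x7F')));
    // Remove the line terminator, as the HirKind::Class arms do above.
    let nl = ClassUnicode::new(Some(ClassUnicodeRange::new('\n', '\n')));
    cls.difference(&nl);
    // No remaining range contains '\n'.
    assert!(cls.iter().all(|r| !(r.start() <= '\n' && '\n' <= r.end())));
}
```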

View File

@@ -1,10 +1,11 @@
use memchr::{memchr, memrchr};
use syntax::ParserBuilder;
use syntax::hir::Hir;
use regex::bytes::{Regex, RegexBuilder};
use syntax;
use literals::LiteralSets;
use nonl;
use syntax::Expr;
use smart_case::Cased;
use word_boundary::strip_unicode_word_boundaries;
use Result;
@@ -166,7 +167,7 @@ impl GrepBuilder {
/// Creates a new regex from the given expression with the current
/// configuration.
fn regex(&self, expr: &Expr) -> Result<Regex> {
fn regex(&self, expr: &Hir) -> Result<Regex> {
let mut builder = RegexBuilder::new(&expr.to_string());
builder.unicode(true);
self.regex_build(builder)
@@ -184,15 +185,16 @@ impl GrepBuilder {
/// Parses the underlying pattern and ensures the pattern can never match
/// the line terminator.
fn parse(&self) -> Result<syntax::Expr> {
let expr =
syntax::ExprBuilder::new()
.allow_bytes(true)
.unicode(true)
fn parse(&self) -> Result<Hir> {
let expr = ParserBuilder::new()
.allow_invalid_utf8(true)
.case_insensitive(self.is_case_insensitive()?)
.multi_line(true)
.build()
.parse(&self.pattern)?;
debug!("original regex HIR pattern:\n{}", expr);
let expr = nonl::remove(expr, self.opts.line_terminator)?;
debug!("regex ast:\n{:#?}", expr);
debug!("transformed regex HIR pattern:\n{}", expr);
Ok(expr)
}
@@ -204,7 +206,11 @@ impl GrepBuilder {
if !self.opts.case_smart {
return Ok(false);
}
Ok(!has_uppercase_literal(&self.pattern))
let cased = match Cased::from_pattern(&self.pattern) {
None => return Ok(false),
Some(cased) => cased,
};
Ok(cased.any_literal && !cased.any_uppercase)
}
}
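The full analysis lives in the new `grep/src/smart_case.rs` further down. As a deliberately simplified, self-contained sketch of the same idea (it only reports uppercase literals and skips class contents, unlike the real `Cased` type):

```rust
extern crate regex_syntax;

use regex_syntax::ast::Ast;
use regex_syntax::ast::parse::Parser;

/// Reports whether any literal character in the pattern is uppercase. Unlike
/// the real `Cased` analysis, this sketch ignores bracketed classes and does
/// not track whether any literal exists at all.
fn any_uppercase_literal(ast: &Ast) -> bool {
    match *ast {
        Ast::Literal(ref x) => x.c.is_uppercase(),
        Ast::Repetition(ref x) => any_uppercase_literal(&x.ast),
        Ast::Group(ref x) => any_uppercase_literal(&x.ast),
        Ast::Alternation(ref x) => x.asts.iter().any(any_uppercase_literal),
        Ast::Concat(ref x) => x.asts.iter().any(any_uppercase_literal),
        _ => false,
    }
}

fn main() {
    let mut parser = Parser::new();
    assert!(!any_uppercase_literal(&parser.parse(r"foo\w").unwrap()));
    assert!(any_uppercase_literal(&parser.parse("Foo").unwrap()));
    // `\p{Lu}` is a class, not a literal, so smart case stays in effect.
    assert!(!any_uppercase_literal(&parser.parse(r"\p{Lu}").unwrap()));
}
```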
@@ -310,44 +316,15 @@ impl<'b, 's> Iterator for Iter<'b, 's> {
}
}
/// Determine whether the pattern contains an uppercase character which should
/// negate the effect of the smart-case option.
///
/// Ideally we would be able to check the AST in order to correctly handle
/// things like '\p{Ll}' and '\p{Lu}' (which should be treated as explicitly
/// cased), but we don't currently have that option. For now, our 'good enough'
/// solution is to simply perform a semi-naïve scan of the input pattern and
/// ignore all characters following a '\'. The ExprBuilder will handle any
/// actual errors, and this at least lets us support the most common cases,
/// like 'foo\w' and 'foo\S', in an intuitive manner.
fn has_uppercase_literal(pattern: &str) -> bool {
let mut chars = pattern.chars();
while let Some(c) = chars.next() {
if c == '\\' {
chars.next();
} else if c.is_uppercase() {
return true;
}
}
false
}
#[cfg(test)]
mod tests {
#![allow(unused_imports)]
use memchr::{memchr, memrchr};
use regex::bytes::Regex;
use super::{GrepBuilder, Match, has_uppercase_literal};
use super::{GrepBuilder, Match};
static SHERLOCK: &'static [u8] = include_bytes!("./data/sherlock.txt");
#[allow(dead_code)]
fn s(bytes: &[u8]) -> String {
String::from_utf8(bytes.to_vec()).unwrap()
}
fn find_lines(pat: &str, haystack: &[u8]) -> Vec<Match> {
let re = Regex::new(pat).unwrap();
let mut lines = vec![];
@@ -376,20 +353,4 @@ mod tests {
assert_eq!(expected.len(), got.len());
assert_eq!(expected, got);
}
#[test]
fn pattern_case() {
assert_eq!(has_uppercase_literal(&"".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo".to_string()), false);
assert_eq!(has_uppercase_literal(&"Foo".to_string()), true);
assert_eq!(has_uppercase_literal(&"foO".to_string()), true);
assert_eq!(has_uppercase_literal(&"foo\\\\".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo\\w".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo\\S".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo\\p{Ll}".to_string()), true);
assert_eq!(has_uppercase_literal(&"foo[a-z]".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo[A-Z]".to_string()), true);
assert_eq!(has_uppercase_literal(&"foo[\\S\\t]".to_string()), false);
assert_eq!(has_uppercase_literal(&"foo\\\\S".to_string()), true);
}
}

grep/src/smart_case.rs (new file, 191 lines)
View File

@@ -0,0 +1,191 @@
use syntax::ast::{self, Ast};
use syntax::ast::parse::Parser;
/// The results of analyzing a regex for cased literals.
#[derive(Clone, Debug, Default)]
pub struct Cased {
/// True if and only if a literal uppercase character occurs in the regex.
///
/// A regex like `\pL` contains no uppercase literals, even though `L`
/// is uppercase and the `\pL` class contains uppercase characters.
pub any_uppercase: bool,
/// True if and only if the regex contains any literal at all. A regex like
/// `\pL` has this set to false.
pub any_literal: bool,
}
impl Cased {
/// Returns a `Cased` value by doing analysis on the AST of `pattern`.
///
/// If `pattern` is not a valid regular expression, then `None` is
/// returned.
pub fn from_pattern(pattern: &str) -> Option<Cased> {
Parser::new()
.parse(pattern)
.map(|ast| Cased::from_ast(&ast))
.ok()
}
fn from_ast(ast: &Ast) -> Cased {
let mut cased = Cased::default();
cased.from_ast_impl(ast);
cased
}
fn from_ast_impl(&mut self, ast: &Ast) {
if self.done() {
return;
}
match *ast {
Ast::Empty(_)
| Ast::Flags(_)
| Ast::Dot(_)
| Ast::Assertion(_)
| Ast::Class(ast::Class::Unicode(_))
| Ast::Class(ast::Class::Perl(_)) => {}
Ast::Literal(ref x) => {
self.from_ast_literal(x);
}
Ast::Class(ast::Class::Bracketed(ref x)) => {
self.from_ast_class_set(&x.kind);
}
Ast::Repetition(ref x) => {
self.from_ast_impl(&x.ast);
}
Ast::Group(ref x) => {
self.from_ast_impl(&x.ast);
}
Ast::Alternation(ref alt) => {
for x in &alt.asts {
self.from_ast_impl(x);
}
}
Ast::Concat(ref alt) => {
for x in &alt.asts {
self.from_ast_impl(x);
}
}
}
}
fn from_ast_class_set(&mut self, ast: &ast::ClassSet) {
if self.done() {
return;
}
match *ast {
ast::ClassSet::Item(ref item) => {
self.from_ast_class_set_item(item);
}
ast::ClassSet::BinaryOp(ref x) => {
self.from_ast_class_set(&x.lhs);
self.from_ast_class_set(&x.rhs);
}
}
}
fn from_ast_class_set_item(&mut self, ast: &ast::ClassSetItem) {
if self.done() {
return;
}
match *ast {
ast::ClassSetItem::Empty(_)
| ast::ClassSetItem::Ascii(_)
| ast::ClassSetItem::Unicode(_)
| ast::ClassSetItem::Perl(_) => {}
ast::ClassSetItem::Literal(ref x) => {
self.from_ast_literal(x);
}
ast::ClassSetItem::Range(ref x) => {
self.from_ast_literal(&x.start);
self.from_ast_literal(&x.end);
}
ast::ClassSetItem::Bracketed(ref x) => {
self.from_ast_class_set(&x.kind);
}
ast::ClassSetItem::Union(ref union) => {
for x in &union.items {
self.from_ast_class_set_item(x);
}
}
}
}
fn from_ast_literal(&mut self, ast: &ast::Literal) {
self.any_literal = true;
self.any_uppercase = self.any_uppercase || ast.c.is_uppercase();
}
/// Returns true if and only if the attributes can never change no matter
/// what other AST it might see.
fn done(&self) -> bool {
self.any_uppercase && self.any_literal
}
}
#[cfg(test)]
mod tests {
use super::*;
fn cased(pattern: &str) -> Cased {
Cased::from_pattern(pattern).unwrap()
}
#[test]
fn various() {
let x = cased("");
assert!(!x.any_uppercase);
assert!(!x.any_literal);
let x = cased("foo");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased("Foo");
assert!(x.any_uppercase);
assert!(x.any_literal);
let x = cased("foO");
assert!(x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo\\");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo\w");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo\S");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo\p{Ll}");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo[a-z]");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo[A-Z]");
assert!(x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo[\S\t]");
assert!(!x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"foo\\S");
assert!(x.any_uppercase);
assert!(x.any_literal);
let x = cased(r"\p{Ll}");
assert!(!x.any_uppercase);
assert!(!x.any_literal);
let x = cased(r"aBc\w");
assert!(x.any_uppercase);
assert!(x.any_literal);
}
}


View File

@@ -1,4 +1,4 @@
use syntax::Expr;
use syntax::hir::{self, Hir, HirKind};
/// Strips Unicode word boundaries from the given expression.
///
@@ -8,7 +8,7 @@ use syntax::Expr;
/// false negatives.
///
/// If no word boundaries could be stripped, then None is returned.
pub fn strip_unicode_word_boundaries(expr: &Expr) -> Option<Expr> {
pub fn strip_unicode_word_boundaries(expr: &Hir) -> Option<Hir> {
// The real reason we do this is because Unicode word boundaries are the
// one thing that Rust's regex DFA engine can't handle. When it sees a
// Unicode word boundary among non-ASCII text, it falls back to one of the
@@ -16,23 +16,24 @@ pub fn strip_unicode_word_boundaries(expr: &Expr) -> Option<Expr> {
// a regex to find candidate matches without a Unicode word boundary. We'll
// only then use the full (and slower) regex to confirm a candidate as a
// match or not during search.
use syntax::Expr::*;
match *expr {
Concat(ref es) if !es.is_empty() => {
//
// It looks like we only check the outer edges for `\b`? I guess this is
// an attempt to optimize for the `-w/--word-regexp` flag? ---AG
match *expr.kind() {
HirKind::Concat(ref es) if !es.is_empty() => {
let first = is_unicode_word_boundary(&es[0]);
let last = is_unicode_word_boundary(es.last().unwrap());
// Be careful not to strip word boundaries if there are no other
// expressions to match.
match (first, last) {
(true, false) if es.len() > 1 => {
Some(Concat(es[1..].to_vec()))
Some(Hir::concat(es[1..].to_vec()))
}
(false, true) if es.len() > 1 => {
Some(Concat(es[..es.len() - 1].to_vec()))
Some(Hir::concat(es[..es.len() - 1].to_vec()))
}
(true, true) if es.len() > 2 => {
Some(Concat(es[1..es.len() - 1].to_vec()))
Some(Hir::concat(es[1..es.len() - 1].to_vec()))
}
_ => None,
}
@@ -42,13 +43,11 @@ pub fn strip_unicode_word_boundaries(expr: &Expr) -> Option<Expr> {
}
/// Returns true if the given expression is a Unicode word boundary.
fn is_unicode_word_boundary(expr: &Expr) -> bool {
use syntax::Expr::*;
match *expr {
WordBoundary => true,
NotWordBoundary => true,
Group { ref e, .. } => is_unicode_word_boundary(e),
fn is_unicode_word_boundary(expr: &Hir) -> bool {
match *expr.kind() {
HirKind::WordBoundary(hir::WordBoundary::Unicode) => true,
HirKind::WordBoundary(hir::WordBoundary::UnicodeNegate) => true,
HirKind::Group(ref x) => is_unicode_word_boundary(&x.hir),
_ => false,
}
}
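A self-contained sketch of the stripping logic above, using regex-syntax directly; the pattern is only an example:

```rust
extern crate regex_syntax;

use regex_syntax::Parser;
use regex_syntax::hir::{Hir, HirKind, WordBoundary};

fn is_unicode_wb(e: &Hir) -> bool {
    match *e.kind() {
        HirKind::WordBoundary(WordBoundary::Unicode)
        | HirKind::WordBoundary(WordBoundary::UnicodeNegate) => true,
        _ => false,
    }
}

fn main() {
    let hir = Parser::new().parse(r"\bfast\b").unwrap();
    // Strip the outer word boundaries, mirroring the (true, true) arm above.
    let stripped = match *hir.kind() {
        HirKind::Concat(ref es)
            if es.len() > 2
                && is_unicode_wb(&es[0])
                && is_unicode_wb(es.last().unwrap()) =>
        {
            Hir::concat(es[1..es.len() - 1].to_vec())
        }
        _ => hir.clone(),
    };
    assert!(hir.to_string().contains(r"\b"));
    assert!(!stripped.to_string().contains(r"\b"));
}
```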

View File

@@ -1,6 +1,6 @@
[package]
name = "ignore"
version = "0.4.1" #:version
version = "0.4.3" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
A fast library for efficiently matching ignore files such as `.gitignore`
@@ -19,11 +19,11 @@ bench = false
[dependencies]
crossbeam = "0.3"
globset = { version = "0.3.0", path = "../globset" }
globset = { version = "0.4.0", path = "../globset" }
lazy_static = "1"
log = "0.4"
memchr = "2"
regex = "0.2.1"
regex = "1"
same-file = "1"
thread_local = "0.3.2"
walkdir = "2"

View File

@@ -20,7 +20,7 @@ Add this to your `Cargo.toml`:
```toml
[dependencies]
ignore = "0.3"
ignore = "0.4"
```
and this to your crate root:

View File

@@ -65,6 +65,8 @@ struct IgnoreOptions {
hidden: bool,
/// Whether to read .ignore files.
ignore: bool,
/// Whether to respect any ignore files in parent directories.
parents: bool,
/// Whether to read git's global gitignore file.
git_global: bool,
/// Whether to read .gitignore files.
@@ -151,6 +153,15 @@ impl Ignore {
&self,
path: P,
) -> (Ignore, Option<Error>) {
if !self.0.opts.parents
&& !self.0.opts.git_ignore
&& !self.0.opts.git_exclude
&& !self.0.opts.git_global
{
// If we never need info from parent directories, then don't do
// anything.
return (self.clone(), None);
}
if !self.is_root() {
panic!("Ignore::add_parents called on non-root matcher");
}
@@ -183,6 +194,7 @@ impl Ignore {
errs.maybe_push(err);
igtmp.is_absolute_parent = true;
igtmp.absolute_base = Some(absolute_base.clone());
igtmp.has_git = parent.join(".git").exists();
ig = Ignore(Arc::new(igtmp));
compiled.insert(parent.as_os_str().to_os_string(), ig.clone());
}
@@ -209,7 +221,9 @@ impl Ignore {
fn add_child_path(&self, dir: &Path) -> (IgnoreInner, Option<Error>) {
let mut errs = PartialErrorBuilder::default();
let custom_ig_matcher =
{
if self.0.custom_ignore_filenames.is_empty() {
Gitignore::empty()
} else {
let (m, err) =
create_gitignore(&dir, &self.0.custom_ignore_filenames);
errs.maybe_push(err);
@@ -254,7 +268,7 @@ impl Ignore {
git_global_matcher: self.0.git_global_matcher.clone(),
git_ignore_matcher: gi_matcher,
git_exclude_matcher: gi_exclude_matcher,
has_git: dir.join(".git").is_dir(),
has_git: dir.join(".git").exists(),
opts: self.0.opts,
};
(ig, errs.into_error_option())
@@ -264,9 +278,11 @@ impl Ignore {
fn has_any_ignore_rules(&self) -> bool {
let opts = self.0.opts;
let has_custom_ignore_files = !self.0.custom_ignore_filenames.is_empty();
let has_explicit_ignores = !self.0.explicit_ignores.is_empty();
opts.ignore || opts.git_global || opts.git_ignore
|| opts.git_exclude || has_custom_ignore_files
|| has_explicit_ignores
}
/// Returns a match indicating whether the given file path should be
@@ -329,6 +345,7 @@ impl Ignore {
) -> Match<IgnoreMatch<'a>> {
let (mut m_custom_ignore, mut m_ignore, mut m_gi, mut m_gi_exclude, mut m_explicit) =
(Match::None, Match::None, Match::None, Match::None, Match::None);
let any_git = self.parents().any(|ig| ig.0.has_git);
let mut saw_git = false;
for ig in self.parents().take_while(|ig| !ig.0.is_absolute_parent) {
if m_custom_ignore.is_none() {
@@ -341,42 +358,44 @@ impl Ignore {
ig.0.ignore_matcher.matched(path, is_dir)
.map(IgnoreMatch::gitignore);
}
if !saw_git && m_gi.is_none() {
if any_git && !saw_git && m_gi.is_none() {
m_gi =
ig.0.git_ignore_matcher.matched(path, is_dir)
.map(IgnoreMatch::gitignore);
}
if !saw_git && m_gi_exclude.is_none() {
if any_git && !saw_git && m_gi_exclude.is_none() {
m_gi_exclude =
ig.0.git_exclude_matcher.matched(path, is_dir)
.map(IgnoreMatch::gitignore);
}
saw_git = saw_git || ig.0.has_git;
}
if let Some(abs_parent_path) = self.absolute_base() {
let path = abs_parent_path.join(path);
for ig in self.parents().skip_while(|ig|!ig.0.is_absolute_parent) {
if m_custom_ignore.is_none() {
m_custom_ignore =
ig.0.custom_ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
if self.0.opts.parents {
if let Some(abs_parent_path) = self.absolute_base() {
let path = abs_parent_path.join(path);
for ig in self.parents().skip_while(|ig|!ig.0.is_absolute_parent) {
if m_custom_ignore.is_none() {
m_custom_ignore =
ig.0.custom_ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
if m_ignore.is_none() {
m_ignore =
ig.0.ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
if any_git && !saw_git && m_gi.is_none() {
m_gi =
ig.0.git_ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
if any_git && !saw_git && m_gi_exclude.is_none() {
m_gi_exclude =
ig.0.git_exclude_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
saw_git = saw_git || ig.0.has_git;
}
if m_ignore.is_none() {
m_ignore =
ig.0.ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
if !saw_git && m_gi.is_none() {
m_gi =
ig.0.git_ignore_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
if !saw_git && m_gi_exclude.is_none() {
m_gi_exclude =
ig.0.git_exclude_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
}
saw_git = saw_git || ig.0.has_git;
}
}
for gi in self.0.explicit_ignores.iter().rev() {
@@ -385,8 +404,14 @@ impl Ignore {
}
m_explicit = gi.matched(&path, is_dir).map(IgnoreMatch::gitignore);
}
let m_global = self.0.git_global_matcher.matched(&path, is_dir)
.map(IgnoreMatch::gitignore);
let m_global =
if any_git {
self.0.git_global_matcher
.matched(&path, is_dir)
.map(IgnoreMatch::gitignore)
} else {
Match::None
};
m_custom_ignore.or(m_ignore).or(m_gi).or(m_gi_exclude).or(m_global).or(m_explicit)
}
@@ -454,6 +479,7 @@ impl IgnoreBuilder {
opts: IgnoreOptions {
hidden: true,
ignore: true,
parents: true,
git_global: true,
git_ignore: true,
git_exclude: true,
@@ -556,6 +582,17 @@ impl IgnoreBuilder {
self
}
/// Enables reading ignore files from parent directories.
///
/// If this is enabled, then .gitignore files in parent directories of each
/// file path given are respected. Otherwise, they are ignored.
///
/// This is enabled by default.
pub fn parents(&mut self, yes: bool) -> &mut IgnoreBuilder {
self.opts.parents = yes;
self
}
/// Add a global gitignore matcher.
///
/// Its precedence is lower than both normal `.gitignore` files and
@@ -677,6 +714,7 @@ mod tests {
#[test]
fn gitignore() {
let td = TempDir::new("ignore-test-").unwrap();
mkdirp(td.path().join(".git"));
wfile(td.path().join(".gitignore"), "foo\n!bar");
let (ig, err) = IgnoreBuilder::new().build().add_child(td.path());
@@ -686,6 +724,18 @@ mod tests {
assert!(ig.matched("baz", false).is_none());
}
#[test]
fn gitignore_no_git() {
let td = TempDir::new("ignore-test-").unwrap();
wfile(td.path().join(".gitignore"), "foo\n!bar");
let (ig, err) = IgnoreBuilder::new().build().add_child(td.path());
assert!(err.is_none());
assert!(ig.matched("foo", false).is_none());
assert!(ig.matched("bar", false).is_none());
assert!(ig.matched("baz", false).is_none());
}
#[test]
fn ignore() {
let td = TempDir::new("ignore-test-").unwrap();
@@ -795,6 +845,7 @@ mod tests {
#[test]
fn errored_partial() {
let td = TempDir::new("ignore-test-").unwrap();
mkdirp(td.path().join(".git"));
wfile(td.path().join(".gitignore"), "f**oo\nbar");
let (ig, err) = IgnoreBuilder::new().build().add_child(td.path());

View File

@@ -83,7 +83,7 @@ pub struct Gitignore {
globs: Vec<Glob>,
num_ignores: u64,
num_whitelists: u64,
matches: Arc<ThreadLocal<RefCell<Vec<usize>>>>,
matches: Option<Arc<ThreadLocal<RefCell<Vec<usize>>>>>,
}
impl Gitignore {
@@ -143,7 +143,14 @@ impl Gitignore {
///
/// Its path is empty.
pub fn empty() -> Gitignore {
GitignoreBuilder::new("").build().unwrap()
Gitignore {
set: GlobSet::empty(),
root: PathBuf::from(""),
globs: vec![],
num_ignores: 0,
num_whitelists: 0,
matches: None,
}
}
/// Returns the directory containing this gitignore matcher.
@@ -213,6 +220,11 @@ impl Gitignore {
/// determined by a common suffix of the directory containing this
/// gitignore) is stripped. If there is no common suffix/prefix overlap,
/// then `path` is assumed to be relative to this matcher.
///
/// # Panics
///
/// This method panics if the given file path is not under the root path
/// of this matcher.
pub fn matched_path_or_any_parents<P: AsRef<Path>>(
&self,
path: P,
@@ -222,10 +234,8 @@ impl Gitignore {
return Match::None;
}
let mut path = self.strip(path.as_ref());
debug_assert!(
!path.has_root(),
"path is expect to be under the root"
);
assert!(!path.has_root(), "path is expected to be under the root");
match self.matched_stripped(path, is_dir) {
Match::None => (), // walk up
a_match => return a_match,
@@ -249,7 +259,7 @@ impl Gitignore {
return Match::None;
}
let path = path.as_ref();
let _matches = self.matches.get_default();
let _matches = self.matches.as_ref().unwrap().get_default();
let mut matches = _matches.borrow_mut();
let candidate = Candidate::new(path);
self.set.matches_candidate_into(&candidate, &mut *matches);
@@ -301,6 +311,7 @@ impl Gitignore {
}
/// Builds a matcher for a single set of globs from a .gitignore file.
#[derive(Clone, Debug)]
pub struct GitignoreBuilder {
builder: GlobSetBuilder,
root: PathBuf,
@@ -344,7 +355,7 @@ impl GitignoreBuilder {
globs: self.globs.clone(),
num_ignores: nignore as u64,
num_whitelists: nwhite as u64,
matches: Arc::new(ThreadLocal::default()),
matches: Some(Arc::new(ThreadLocal::default())),
})
}
@@ -477,6 +488,7 @@ impl GitignoreBuilder {
GlobBuilder::new(&glob.actual)
.literal_separator(literal_separator)
.case_insensitive(self.case_insensitive)
.backslash_escape(true)
.build()
.map_err(|err| {
Error::Glob {
@@ -491,6 +503,8 @@ impl GitignoreBuilder {
/// Toggle whether the globs should be matched case insensitively or not.
///
/// When this option is changed, only globs added after the change will be affected.
///
/// This is disabled by default.
pub fn case_insensitive(
&mut self, yes: bool
@@ -504,16 +518,27 @@ impl GitignoreBuilder {
///
/// Note that the file path returned may not exist.
fn gitconfig_excludes_path() -> Option<PathBuf> {
gitconfig_contents()
.and_then(|data| parse_excludes_file(&data))
.or_else(excludes_file_default)
// git supports $HOME/.gitconfig and $XDG_CONFIG_HOME/git/config. Notably,
// both can be active at the same time, where $HOME/.gitconfig takes
// precedence. So if $HOME/.gitconfig defines a `core.excludesFile`, then
// we're done.
match gitconfig_home_contents().and_then(|x| parse_excludes_file(&x)) {
Some(path) => return Some(path),
None => {}
}
match gitconfig_xdg_contents().and_then(|x| parse_excludes_file(&x)) {
Some(path) => return Some(path),
None => {}
}
excludes_file_default()
}
/// Returns the file contents of git's global config file, if one exists.
fn gitconfig_contents() -> Option<Vec<u8>> {
let home = match env::var_os("HOME") {
/// Returns the file contents of git's global config file, if one exists, in
/// the user's home directory.
fn gitconfig_home_contents() -> Option<Vec<u8>> {
let home = match home_dir() {
None => return None,
Some(home) => PathBuf::from(home),
Some(home) => home,
};
let mut file = match File::open(home.join(".gitconfig")) {
Err(_) => return None,
@@ -523,13 +548,28 @@ fn gitconfig_contents() -> Option<Vec<u8>> {
file.read_to_end(&mut contents).ok().map(|_| contents)
}
/// Returns the file contents of git's global config file, if one exists, in
/// the user's $XDG_CONFIG_HOME directory.
fn gitconfig_xdg_contents() -> Option<Vec<u8>> {
let path = env::var_os("XDG_CONFIG_HOME")
.and_then(|x| if x.is_empty() { None } else { Some(PathBuf::from(x)) })
.or_else(|| home_dir().map(|p| p.join(".config")))
.map(|x| x.join("git/config"));
let mut file = match path.and_then(|p| File::open(p).ok()) {
None => return None,
Some(file) => io::BufReader::new(file),
};
let mut contents = vec![];
file.read_to_end(&mut contents).ok().map(|_| contents)
}
/// Returns the default file path for a global .gitignore file.
///
/// Specifically, this respects XDG_CONFIG_HOME.
fn excludes_file_default() -> Option<PathBuf> {
env::var_os("XDG_CONFIG_HOME")
.and_then(|x| if x.is_empty() { None } else { Some(PathBuf::from(x)) })
.or_else(|| env::home_dir().map(|p| p.join(".config")))
.or_else(|| home_dir().map(|p| p.join(".config")))
.map(|x| x.join("git/ignore"))
}
@@ -541,7 +581,8 @@ fn parse_excludes_file(data: &[u8]) -> Option<PathBuf> {
// a full INI parser. Yuck.
lazy_static! {
static ref RE: Regex = Regex::new(
r"(?ium)^\s*excludesfile\s*=\s*(.+)\s*$").unwrap();
r"(?im)^\s*excludesfile\s*=\s*(.+)\s*$"
).unwrap();
};
let caps = match RE.captures(data) {
None => return None,
@@ -552,13 +593,22 @@ fn parse_excludes_file(data: &[u8]) -> Option<PathBuf> {
/// Expands ~ in file paths to the value of $HOME.
fn expand_tilde(path: &str) -> String {
let home = match env::var("HOME") {
Err(_) => return path.to_string(),
Ok(home) => home,
let home = match home_dir() {
None => return path.to_string(),
Some(home) => home.to_string_lossy().into_owned(),
};
path.replace("~", &home)
}
/// Returns the location of the user's home directory.
fn home_dir() -> Option<PathBuf> {
// We're fine with using env::home_dir for now. Its bugs are, IMO, pretty
// minor corner cases. We should still probably eventually migrate to
// the `dirs` crate to get a proper implementation.
#![allow(deprecated)]
env::home_dir()
}
#[cfg(test)]
mod tests {
use std::path::Path;
@@ -635,6 +685,10 @@ mod tests {
ignored!(ig35, "./.", ".a/b", ".a/b");
ignored!(ig36, "././", ".a/b", ".a/b");
ignored!(ig37, "././.", ".a/b", ".a/b");
ignored!(ig38, ROOT, "\\[", "[");
ignored!(ig39, ROOT, "\\?", "?");
ignored!(ig40, ROOT, "\\*", "*");
ignored!(ig41, ROOT, "\\a", "a");
not_ignored!(ignot1, ROOT, "amonths", "months");
not_ignored!(ignot2, ROOT, "monthsa", "months");
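The four new `ignored!` cases exercise the backslash-escape support wired into `GitignoreBuilder` above. The same behavior through the public API, as a minimal sketch (the root path is arbitrary):

```rust
extern crate ignore;

use ignore::gitignore::GitignoreBuilder;

fn main() {
    // Backslash escapes are honored in .gitignore globs, so `\*` ignores a
    // file literally named `*` rather than acting as a wildcard.
    let mut builder = GitignoreBuilder::new("/repo");
    builder.add_line(None, r"\*").unwrap();
    let gi = builder.build().unwrap();
    assert!(gi.matched("*", false).is_ignore());
    assert!(gi.matched("foo", false).is_none());
}
```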

View File

@@ -132,6 +132,44 @@ pub enum Error {
InvalidDefinition,
}
impl Clone for Error {
fn clone(&self) -> Error {
match *self {
Error::Partial(ref errs) => Error::Partial(errs.clone()),
Error::WithLineNumber { line, ref err } => {
Error::WithLineNumber { line: line, err: err.clone() }
}
Error::WithPath { ref path, ref err } => {
Error::WithPath { path: path.clone(), err: err.clone() }
}
Error::WithDepth { depth, ref err } => {
Error::WithDepth { depth: depth, err: err.clone() }
}
Error::Loop { ref ancestor, ref child } => {
Error::Loop {
ancestor: ancestor.clone(),
child: child.clone()
}
}
Error::Io(ref err) => {
match err.raw_os_error() {
Some(e) => Error::Io(io::Error::from_raw_os_error(e)),
None => {
Error::Io(io::Error::new(err.kind(), err.to_string()))
}
}
}
Error::Glob { ref glob, ref err } => {
Error::Glob { glob: glob.clone(), err: err.clone() }
}
Error::UnrecognizedFileType(ref err) => {
Error::UnrecognizedFileType(err.clone())
}
Error::InvalidDefinition => Error::InvalidDefinition,
}
}
}
impl Error {
/// Returns true if this is a partial error.
///

View File

@@ -140,6 +140,8 @@ impl OverrideBuilder {
/// Toggle whether the globs should be matched case insensitively or not.
///
/// When this option is changed, only globs added after the change will be affected.
///
/// This is disabled by default.
pub fn case_insensitive(
&mut self, yes: bool

View File

@@ -98,10 +98,13 @@ use {Error, Match};
const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("agda", &["*.agda", "*.lagda"]),
("aidl", &["*.aidl"]),
("amake", &["*.mk", "*.bp"]),
("asciidoc", &["*.adoc", "*.asc", "*.asciidoc"]),
("asm", &["*.asm", "*.s", "*.S"]),
("avro", &["*.avdl", "*.avpr", "*.avsc"]),
("awk", &["*.awk"]),
("bazel", &["*.bzl", "WORKSPACE", "BUILD"]),
("bitbake", &["*.bb", "*.bbappend", "*.bbclass", "*.conf", "*.inc"]),
("bzip2", &["*.bz2"]),
("c", &["*.c", "*.h", "*.H"]),
@@ -131,6 +134,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("elixir", &["*.ex", "*.eex", "*.exs"]),
("elm", &["*.elm"]),
("erlang", &["*.erl", "*.hrl"]),
("fidl", &["*.fidl"]),
("fish", &["*.fish"]),
("fortran", &[
"*.f", "*.F", "*.f77", "*.F77", "*.pfo",
@@ -144,8 +148,9 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("h", &["*.h", "*.hpp"]),
("hbs", &["*.hbs"]),
("haskell", &["*.hs", "*.lhs"]),
("hs", &["*.hs", "*.lhs"]),
("html", &["*.htm", "*.html", "*.ejs"]),
("java", &["*.java"]),
("java", &["*.java", "*.jsp"]),
("jinja", &["*.j2", "*.jinja", "*.jinja2"]),
("js", &[
"*.js", "*.jsx", "*.vue",
@@ -188,6 +193,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("log", &["*.log"]),
("lua", &["*.lua"]),
("lzma", &["*.lzma"]),
("lz4", &["*.lz4"]),
("m4", &["*.ac", "*.m4"]),
("make", &[
"gnumakefile", "Gnumakefile", "GNUmakefile",
@@ -215,6 +221,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("pod", &["*.pod"]),
("protobuf", &["*.proto"]),
("ps", &["*.cdxml", "*.ps1", "*.ps1xml", "*.psd1", "*.psm1"]),
("puppet", &["*.erb", "*.pp", "*.rb"]),
("purs", &["*.purs"]),
("py", &["*.py"]),
("qmake", &["*.pro", "*.pri", "*.prf"]),
@@ -275,6 +282,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("twig", &["*.twig"]),
("vala", &["*.vala"]),
("vb", &["*.vb"]),
("verilog", &["*.v", "*.vh", "*.sv", "*.svh"]),
("vhdl", &["*.vhd", "*.vhdl"]),
("vim", &["*.vim"]),
("vimscript", &["*.vim"]),

View File

@@ -24,7 +24,7 @@ use {Error, PartialErrorBuilder};
///
/// The error typically refers to a problem parsing ignore files in a
/// particular directory.
#[derive(Debug)]
#[derive(Clone, Debug)]
pub struct DirEntry {
dent: DirEntryInner,
err: Option<Error>,
@@ -126,7 +126,7 @@ impl DirEntry {
///
/// Specifically, (3) has to essentially re-create the DirEntry implementation
/// from WalkDir.
#[derive(Debug)]
#[derive(Clone, Debug)]
enum DirEntryInner {
Stdin,
Walkdir(walkdir::DirEntry),
@@ -235,6 +235,7 @@ impl DirEntryInner {
/// DirEntryRaw is essentially copied from the walkdir crate so that we can
/// build `DirEntry`s from whole cloth in the parallel iterator.
#[derive(Clone)]
struct DirEntryRaw {
/// The path as reported by the `fs::ReadDir` iterator (even if it's a
/// symbolic link).
@@ -456,7 +457,6 @@ impl DirEntryRaw {
pub struct WalkBuilder {
paths: Vec<PathBuf>,
ig_builder: IgnoreBuilder,
parents: bool,
max_depth: Option<usize>,
max_filesize: Option<u64>,
follow_links: bool,
@@ -471,7 +471,6 @@ impl fmt::Debug for WalkBuilder {
f.debug_struct("WalkBuilder")
.field("paths", &self.paths)
.field("ig_builder", &self.ig_builder)
.field("parents", &self.parents)
.field("max_depth", &self.max_depth)
.field("max_filesize", &self.max_filesize)
.field("follow_links", &self.follow_links)
@@ -491,7 +490,6 @@ impl WalkBuilder {
WalkBuilder {
paths: vec![path.as_ref().to_path_buf()],
ig_builder: IgnoreBuilder::new(),
parents: true,
max_depth: None,
max_filesize: None,
follow_links: false,
@@ -530,7 +528,6 @@ impl WalkBuilder {
ig_root: ig_root.clone(),
ig: ig_root.clone(),
max_filesize: self.max_filesize,
parents: self.parents,
}
}
@@ -546,7 +543,6 @@ impl WalkBuilder {
max_depth: self.max_depth,
max_filesize: self.max_filesize,
follow_links: self.follow_links,
parents: self.parents,
threads: self.threads,
}
}
@@ -678,14 +674,12 @@ impl WalkBuilder {
/// Enables reading ignore files from parent directories.
///
/// If this is enabled, then the parent directories of each file path given
/// are traversed for ignore files (subject to the ignore settings on
/// this builder). Note that file paths are canonicalized with respect to
/// the current working directory in order to determine parent directories.
/// If this is enabled, then .gitignore files in parent directories of each
/// file path given are respected. Otherwise, they are ignored.
///
/// This is enabled by default.
pub fn parents(&mut self, yes: bool) -> &mut WalkBuilder {
self.parents = yes;
self.ig_builder.parents(yes);
self
}
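Through the public API, disabling parent traversal looks like this (a minimal sketch; the starting path is arbitrary):

```rust
extern crate ignore;

use ignore::WalkBuilder;

fn main() {
    // Walk the current directory, but ignore any .gitignore/.ignore rules
    // found in parent directories of the starting path.
    let walker = WalkBuilder::new("./").parents(false).build();
    for result in walker {
        match result {
            Ok(entry) => println!("{}", entry.path().display()),
            Err(err) => eprintln!("ERROR: {}", err),
        }
    }
}
```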
@@ -764,7 +758,6 @@ pub struct Walk {
ig_root: Ignore,
ig: Ignore,
max_filesize: Option<u64>,
parents: bool,
}
impl Walk {
@@ -811,7 +804,7 @@ impl Iterator for Walk {
}
Some((path, Some(it))) => {
self.it = Some(it);
if self.parents && path_is_dir(&path) {
if path_is_dir(&path) {
let (ig, err) = self.ig_root.add_parents(path);
self.ig = ig;
if let Some(err) = err {
@@ -947,7 +940,6 @@ impl WalkState {
pub struct WalkParallel {
paths: vec::IntoIter<PathBuf>,
ig_root: Ignore,
parents: bool,
max_filesize: Option<u64>,
max_depth: Option<usize>,
follow_links: bool,
@@ -1009,7 +1001,6 @@ impl WalkParallel {
num_waiting: num_waiting.clone(),
num_quitting: num_quitting.clone(),
threads: threads,
parents: self.parents,
max_depth: self.max_depth,
max_filesize: self.max_filesize,
follow_links: self.follow_links,
@@ -1125,9 +1116,6 @@ struct Worker {
num_quitting: Arc<AtomicUsize>,
/// The total number of workers.
threads: usize,
/// Whether to create ignore matchers for parents of caller specified
/// directories.
parents: bool,
/// The maximum depth of directories to descend. A value of `0` means no
/// descension at all.
max_depth: Option<usize>,
@@ -1155,12 +1143,10 @@ impl Worker {
}
continue;
}
if self.parents {
if let Some(err) = work.add_parents() {
if (self.f)(Err(err)).is_quit() {
self.quit_now();
return;
}
if let Some(err) = work.add_parents() {
if (self.f)(Err(err)).is_quit() {
self.quit_now();
return;
}
}
let readdir = match work.read_dir() {
@@ -1644,6 +1630,7 @@ mod tests {
#[test]
fn gitignore() {
let td = TempDir::new("walk-test-").unwrap();
mkdirp(td.path().join(".git"));
mkdirp(td.path().join("a"));
wfile(td.path().join(".gitignore"), "foo");
wfile(td.path().join("foo"), "");
@@ -1672,9 +1659,28 @@ mod tests {
assert_paths(td.path(), &builder, &["bar", "a", "a/bar"]);
}
#[test]
fn explicit_ignore_exclusive_use() {
let td = TempDir::new("walk-test-").unwrap();
let igpath = td.path().join(".not-an-ignore");
mkdirp(td.path().join("a"));
wfile(&igpath, "foo");
wfile(td.path().join("foo"), "");
wfile(td.path().join("a/foo"), "");
wfile(td.path().join("bar"), "");
wfile(td.path().join("a/bar"), "");
let mut builder = WalkBuilder::new(td.path());
builder.standard_filters(false);
assert!(builder.add_ignore(&igpath).is_none());
assert_paths(td.path(), &builder,
&[".not-an-ignore", "bar", "a", "a/bar"]);
}
#[test]
fn gitignore_parent() {
let td = TempDir::new("walk-test-").unwrap();
mkdirp(td.path().join(".git"));
mkdirp(td.path().join("a"));
wfile(td.path().join(".gitignore"), "foo");
wfile(td.path().join("a/foo"), "");

View File

@@ -1,13 +1,11 @@
extern crate ignore;
use std::path::Path;
use ignore::gitignore::{Gitignore, GitignoreBuilder};
const IGNORE_FILE: &'static str = "tests/gitignore_matched_path_or_any_parents_tests.gitignore";
const IGNORE_FILE: &'static str =
"tests/gitignore_matched_path_or_any_parents_tests.gitignore";
fn get_gitignore() -> Gitignore {
let mut builder = GitignoreBuilder::new("ROOT");
@@ -16,9 +14,8 @@ fn get_gitignore() -> Gitignore {
builder.build().unwrap()
}
#[test]
#[should_panic(expected = "path is expect to be under the root")]
#[should_panic(expected = "path is expected to be under the root")]
fn test_path_should_be_under_root() {
let gitignore = get_gitignore();
let path = "/tmp/some_file";
@@ -26,11 +23,12 @@ fn test_path_should_be_under_root() {
assert!(false);
}
#[test]
fn test_files_in_root() {
let gitignore = get_gitignore();
let m = |path: &str| gitignore.matched_path_or_any_parents(Path::new(path), false);
let m = |path: &str| {
gitignore.matched_path_or_any_parents(Path::new(path), false)
};
// 0x
assert!(m("ROOT/file_root_00").is_ignore());
@@ -61,7 +59,9 @@ fn test_files_in_root() {
#[test]
fn test_files_in_deep() {
let gitignore = get_gitignore();
let m = |path: &str| gitignore.matched_path_or_any_parents(Path::new(path), false);
let m = |path: &str| {
gitignore.matched_path_or_any_parents(Path::new(path), false)
};
// 0x
assert!(m("ROOT/parent_dir/file_deep_00").is_ignore());
@@ -92,8 +92,9 @@ fn test_files_in_deep() {
#[test]
fn test_dirs_in_root() {
let gitignore = get_gitignore();
let m =
|path: &str, is_dir: bool| gitignore.matched_path_or_any_parents(Path::new(path), is_dir);
let m = |path: &str, is_dir: bool| {
gitignore.matched_path_or_any_parents(Path::new(path), is_dir)
};
// 00
assert!(m("ROOT/dir_root_00", true).is_ignore());
@@ -196,20 +197,25 @@ fn test_dirs_in_root() {
#[test]
fn test_dirs_in_deep() {
let gitignore = get_gitignore();
let m =
|path: &str, is_dir: bool| gitignore.matched_path_or_any_parents(Path::new(path), is_dir);
let m = |path: &str, is_dir: bool| {
gitignore.matched_path_or_any_parents(Path::new(path), is_dir)
};
// 00
assert!(m("ROOT/parent_dir/dir_deep_00", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_00/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_00/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_00/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_00/child_dir/file", false).is_ignore()
);
// 01
assert!(m("ROOT/parent_dir/dir_deep_01", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_01/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_01/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_01/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_01/child_dir/file", false).is_ignore()
);
// 02
assert!(m("ROOT/parent_dir/dir_deep_02", true).is_none());
@@ -251,47 +257,67 @@ fn test_dirs_in_deep() {
assert!(m("ROOT/parent_dir/dir_deep_20", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_20/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_20/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_20/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_20/child_dir/file", false).is_ignore()
);
// 21
assert!(m("ROOT/parent_dir/dir_deep_21", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_21/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_21/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_21/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_21/child_dir/file", false).is_ignore()
);
// 22
assert!(m("ROOT/parent_dir/dir_deep_22", true).is_none()); // dir itself doesn't match
// dir itself doesn't match
assert!(m("ROOT/parent_dir/dir_deep_22", true).is_none());
assert!(m("ROOT/parent_dir/dir_deep_22/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_22/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_22/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_22/child_dir/file", false).is_ignore()
);
// 23
assert!(m("ROOT/parent_dir/dir_deep_23", true).is_none()); // dir itself doesn't match
// dir itself doesn't match
assert!(m("ROOT/parent_dir/dir_deep_23", true).is_none());
assert!(m("ROOT/parent_dir/dir_deep_23/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_23/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_23/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_23/child_dir/file", false).is_ignore()
);
// 30
assert!(m("ROOT/parent_dir/dir_deep_30", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_30/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_30/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_30/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_30/child_dir/file", false).is_ignore()
);
// 31
assert!(m("ROOT/parent_dir/dir_deep_31", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_31/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_31/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_31/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_31/child_dir/file", false).is_ignore()
);
// 32
assert!(m("ROOT/parent_dir/dir_deep_32", true).is_none()); // dir itself doesn't match
// dir itself doesn't match
assert!(m("ROOT/parent_dir/dir_deep_32", true).is_none());
assert!(m("ROOT/parent_dir/dir_deep_32/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_32/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_32/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_32/child_dir/file", false).is_ignore()
);
// 33
assert!(m("ROOT/parent_dir/dir_deep_33", true).is_none()); // dir itself doesn't match
// dir itself doesn't match
assert!(m("ROOT/parent_dir/dir_deep_33", true).is_none());
assert!(m("ROOT/parent_dir/dir_deep_33/file", false).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_33/child_dir", true).is_ignore());
assert!(m("ROOT/parent_dir/dir_deep_33/child_dir/file", false).is_ignore());
assert!(
m("ROOT/parent_dir/dir_deep_33/child_dir/file", false).is_ignore()
);
}

View File

@@ -1,14 +1,14 @@
class RipgrepBin < Formula
version '0.8.0'
version '0.8.1'
desc "Recursively search directories for a regex pattern."
homepage "https://github.com/BurntSushi/ripgrep"
if OS.mac?
url "https://github.com/BurntSushi/ripgrep/releases/download/#{version}/ripgrep-#{version}-x86_64-apple-darwin.tar.gz"
sha256 "fb35e92fd57d28a1e68daf964764c3da7f027ad30cca7a07a1848224776f36b2"
sha256 "71f8d2907b473e5fc30159b822b0f1de247634ee292d5cc3fa1bb80222e0f613"
elsif OS.linux?
url "https://github.com/BurntSushi/ripgrep/releases/download/#{version}/ripgrep-#{version}-x86_64-unknown-linux-musl.tar.gz"
sha256 "621f2f16f65203aa37e7c10ecfb22384c4c01e70ebbd30a13c0d6944ffc6e59e"
sha256 "08b1aa1440a23a88c94cff41a860340ecf38e9108817aff30ff778c00c63eb76"
end
conflicts_with "ripgrep"

View File

@@ -1,5 +1,5 @@
name: ripgrep # you probably want to 'snapcraft register <name>'
version: '0.5.1' # just for humans, typically '1.2+git' or '1.3.2'
version: '0.8.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Fast file searcher # 79 char long summary
description: |
ripgrep combines the usability of The Silver Searcher with the raw speed of grep.
@@ -11,5 +11,5 @@ parts:
source: .
apps:
rg:
command: env PATH=$SNAP/bin:$PATH rg
aliases: [rg]
adapter: none
command: ./bin/rg

View File

@@ -31,9 +31,10 @@ Use -h for short descriptions and --help for more details.";
const USAGE: &str = "
rg [OPTIONS] PATTERN [PATH ...]
rg [OPTIONS] [-e PATTERN ...] [-f FILE ...] [PATH ...]
rg [OPTIONS] [-e PATTERN ...] [-f PATTERNFILE ...] [PATH ...]
rg [OPTIONS] --files [PATH ...]
rg [OPTIONS] --type-list";
rg [OPTIONS] --type-list
command | rg [OPTIONS] PATTERN";
const TEMPLATE: &str = "\
{bin} {version}
@@ -475,23 +476,6 @@ impl RGArg {
});
self
}
/// Indicate that any value given to this argument should be a valid
/// line number width. A valid line number width cannot start with `0`
/// to maintain compatibility with future improvements that add support
/// for padding character specifiers.
fn line_number_width(mut self) -> RGArg {
self.claparg = self.claparg.validator(|val| {
if val.starts_with("0") {
Err(String::from(
"Custom padding characters are currently not supported. \
Please enter only a numeric value."))
} else {
val.parse::<usize>().map(|_| ()).map_err(|err| err.to_string())
}
});
self
}
}
// We add an extra space to long descriptions so that a black line is inserted
@@ -509,6 +493,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
// Flags can be defined in any order, but we do it alphabetically.
flag_after_context(&mut args);
flag_before_context(&mut args);
flag_byte_offset(&mut args);
flag_case_sensitive(&mut args);
flag_color(&mut args);
flag_colors(&mut args);
@@ -516,6 +501,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_context(&mut args);
flag_context_separator(&mut args);
flag_count(&mut args);
flag_count_matches(&mut args);
flag_debug(&mut args);
flag_dfa_size_limit(&mut args);
flag_encoding(&mut args);
@@ -533,15 +519,16 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_ignore_file(&mut args);
flag_invert_match(&mut args);
flag_line_number(&mut args);
flag_line_number_width(&mut args);
flag_line_regexp(&mut args);
flag_max_columns(&mut args);
flag_max_count(&mut args);
flag_max_depth(&mut args);
flag_max_filesize(&mut args);
flag_maxdepth(&mut args);
flag_mmap(&mut args);
flag_no_config(&mut args);
flag_no_ignore(&mut args);
flag_no_ignore_global(&mut args);
flag_no_ignore_messages(&mut args);
flag_no_ignore_parent(&mut args);
flag_no_ignore_vcs(&mut args);
flag_no_messages(&mut args);
@@ -549,6 +536,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_only_matching(&mut args);
flag_path_separator(&mut args);
flag_passthru(&mut args);
flag_pre(&mut args);
flag_pretty(&mut args);
flag_quiet(&mut args);
flag_regex_size_limit(&mut args);
@@ -557,6 +545,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_search_zip(&mut args);
flag_smart_case(&mut args);
flag_sort_files(&mut args);
flag_stats(&mut args);
flag_text(&mut args);
flag_threads(&mut args);
flag_type(&mut args);
@@ -634,6 +623,19 @@ This overrides the --context flag.
args.push(arg);
}
fn flag_byte_offset(args: &mut Vec<RGArg>) {
const SHORT: &str =
"Print the 0-based byte offset for each matching line.";
const LONG: &str = long!("\
Print the 0-based byte offset within the input file
before each line of output. If -o (--only-matching) is
specified, print the offset of the matching part itself.
");
let arg = RGArg::switch("byte-offset").short("b")
.help(SHORT).long_help(LONG);
args.push(arg);
}
fn flag_case_sensitive(args: &mut Vec<RGArg>) {
const SHORT: &str = "Search case sensitively (default).";
const LONG: &str = long!("\
@@ -724,9 +726,17 @@ fn flag_column(args: &mut Vec<RGArg>) {
Show column numbers (1-based). This only shows the column numbers for the first
match on each line. This does not try to account for Unicode. One byte is equal
to one column. This implies --line-number.
This flag can be disabled with --no-column.
");
let arg = RGArg::switch("column")
.help(SHORT).long_help(LONG);
.help(SHORT).long_help(LONG)
.overrides("no-column");
args.push(arg);
let arg = RGArg::switch("no-column")
.hidden()
.overrides("column");
args.push(arg);
}
@@ -758,7 +768,7 @@ sequences like \\x7F or \\t may be used. The default value is --.
}
fn flag_count(args: &mut Vec<RGArg>) {
const SHORT: &str = "Only show the count of matches for each file.";
const SHORT: &str = "Only show the count of matching lines for each file.";
const LONG: &str = long!("\
This flag suppresses normal output and shows the number of lines that match
the given patterns for each file searched. Each file containing a match has its
path and count printed on each line. Note that this reports the number of lines
that match and not the total number of matches.
If only one file is given to ripgrep, then only the count is printed if there
is a match. The --with-filename flag can be used to force printing the file
path in this case.
This overrides the --count-matches flag. Note that when --count is combined
with --only-matching, then ripgrep behaves as if --count-matches was given.
");
let arg = RGArg::switch("count").short("c")
.help(SHORT).long_help(LONG);
.help(SHORT).long_help(LONG).overrides("count-matches");
args.push(arg);
}
fn flag_count_matches(args: &mut Vec<RGArg>) {
const SHORT: &str =
"Only show the count of individual matches for each file.";
const LONG: &str = long!("\
This flag suppresses normal output and shows the number of individual
matches of the given patterns for each file searched. Each file
containing matches has its path and match count printed on each line.
Note that this reports the total number of individual matches and not
the number of lines that match.
If only one file is given to ripgrep, then only the count is printed if there
is a match. The --with-filename flag can be used to force printing the file
path in this case.
This overrides the --count flag. Note that when --count is combined with
--only-matching, then ripgrep behaves as if --count-matches was given.
");
let arg = RGArg::switch("count-matches")
.help(SHORT).long_help(LONG).overrides("count");
args.push(arg);
}
@@ -823,7 +858,7 @@ input lines, and the newline is not counted as part of the pattern.
A line is printed if and only if it matches at least one of the patterns.
");
let arg = RGArg::flag("file", "PATH").short("f")
let arg = RGArg::flag("file", "PATTERNFILE").short("f")
.help(SHORT).long_help(LONG)
.multiple()
.allow_leading_hyphen();
@@ -877,9 +912,17 @@ fn flag_fixed_strings(args: &mut Vec<RGArg>) {
Treat the pattern as a literal string instead of a regular expression. When
this flag is used, special regular expression meta characters such as .(){}*+
do not need to be escaped.
This flag can be disabled with --no-fixed-strings.
");
let arg = RGArg::switch("fixed-strings").short("F")
.help(SHORT).long_help(LONG);
.help(SHORT).long_help(LONG)
.overrides("no-fixed-strings");
args.push(arg);
let arg = RGArg::switch("no-fixed-strings")
.hidden()
.overrides("fixed-strings");
args.push(arg);
}
@@ -1002,10 +1045,10 @@ fn flag_ignore_file(args: &mut Vec<RGArg>) {
const SHORT: &str = "Specify additional ignore files.";
const LONG: &str = long!("\
Specifies a path to one or more .gitignore format rules files. These patterns
are applied after the patterns found in .gitignore and .ignore are applied
and are matched relative to the current working directory. Multiple additional
ignore files can be specified by using the --ignore-file flag several times.
When specifying multiple ignore files, earlier files have lower precedence
than later files.
If you are looking for a way to include or exclude files and directories
@@ -1050,18 +1093,6 @@ terminal.
args.push(arg);
}
fn flag_line_number_width(args: &mut Vec<RGArg>) {
const SHORT: &str = "Left pad line numbers up to NUM width.";
const LONG: &str = long!("\
Left pad line numbers up to NUM width. Space is used as the default padding
character. This has no effect if --no-line-number is enabled.
");
let arg = RGArg::flag("line-number-width", "NUM")
.help(SHORT).long_help(LONG)
.line_number_width();
args.push(arg);
}
fn flag_line_regexp(args: &mut Vec<RGArg>) {
const SHORT: &str = "Only show matches surrounded by line boundaries.";
const LONG: &str = long!("\
@@ -1102,6 +1133,23 @@ Limit the number of matching lines per file searched to NUM.
args.push(arg);
}
fn flag_max_depth(args: &mut Vec<RGArg>) {
const SHORT: &str = "Descend at most NUM directories.";
const LONG: &str = long!("\
Limit the depth of directory traversal to NUM levels beyond the paths given. A
value of zero only searches the explicitly given paths themselves.
For example, 'rg --max-depth 0 dir/' is a no-op because dir/ will not be
descended into. 'rg --max-depth 1 dir/' will search only the direct children of
'dir'.
");
let arg = RGArg::flag("max-depth", "NUM")
.help(SHORT).long_help(LONG)
.alias("maxdepth")
.number();
args.push(arg);
}
fn flag_max_filesize(args: &mut Vec<RGArg>) {
const SHORT: &str = "Ignore files larger than NUM in size.";
const LONG: &str = long!("\
@@ -1118,22 +1166,6 @@ Examples: --max-filesize 50K or --max-filesize 80M
args.push(arg);
}
fn flag_maxdepth(args: &mut Vec<RGArg>) {
const SHORT: &str = "Descend at most NUM directories.";
const LONG: &str = long!("\
Limit the depth of directory traversal to NUM levels beyond the paths given. A
value of zero only searches the explicitly given paths themselves.
For example, 'rg --maxdepth 0 dir/' is a no-op because dir/ will not be
descended into. 'rg --maxdepth 1 dir/' will search only the direct children of
'dir'.
");
let arg = RGArg::flag("maxdepth", "NUM")
.help(SHORT).long_help(LONG)
.number();
args.push(arg);
}
fn flag_mmap(args: &mut Vec<RGArg>) {
const SHORT: &str = "Search using memory maps when possible.";
const LONG: &str = long!("\
@@ -1199,6 +1231,45 @@ This flag can be disabled with the --ignore flag.
args.push(arg);
}
fn flag_no_ignore_global(args: &mut Vec<RGArg>) {
const SHORT: &str = "Don't respect global ignore files.";
const LONG: &str = long!("\
Don't respect ignore files that come from \"global\" sources such as git's
`core.excludesFile` configuration option (which defaults to
`$HOME/.config/git/ignore`).
This flag can be disabled with the --ignore-global flag.
");
let arg = RGArg::switch("no-ignore-global")
.help(SHORT).long_help(LONG)
.overrides("ignore-global");
args.push(arg);
let arg = RGArg::switch("ignore-global")
.hidden()
.overrides("no-ignore-global");
args.push(arg);
}
fn flag_no_ignore_messages(args: &mut Vec<RGArg>) {
const SHORT: &str = "Suppress gitignore parse error messages.";
const LONG: &str = long!("\
Suppresses all error messages related to parsing ignore files such as .ignore
or .gitignore.
This flag can be disabled with the --ignore-messages flag.
");
let arg = RGArg::switch("no-ignore-messages")
.help(SHORT).long_help(LONG)
.overrides("ignore-messages");
args.push(arg);
let arg = RGArg::switch("ignore-messages")
.hidden()
.overrides("no-ignore-messages");
args.push(arg);
}
fn flag_no_ignore_parent(args: &mut Vec<RGArg>) {
const SHORT: &str = "Don't respect ignore files in parent directories.";
const LONG: &str = long!("\
@@ -1238,10 +1309,10 @@ This flag can be disabled with the --ignore-vcs flag.
}
fn flag_no_messages(args: &mut Vec<RGArg>) {
const SHORT: &str = "Suppress all error messages.";
const SHORT: &str = "Suppress some error messages.";
const LONG: &str = long!("\
Suppress all error messages. This provides the same behavior as redirecting
stderr to /dev/null on Unix-like systems.
Suppress all error messages related to opening and reading files. Error
messages related to the syntax of the pattern given are still shown.
This flag can be disabled with the --messages flag.
");
@@ -1331,6 +1402,9 @@ fn flag_quiet(args: &mut Vec<RGArg>) {
Do not print anything to stdout. If a match is found in a file, then ripgrep
will stop searching. This is useful when ripgrep is used only for its exit
code (which will be an error if no matches are found).
When --files is used, then ripgrep will stop finding files after finding the
first file that matches all ignore rules.
");
let arg = RGArg::switch("quiet").short("q")
.help(SHORT).long_help(LONG);
@@ -1397,7 +1471,7 @@ This flag can be used with the -o/--only-matching flag.
fn flag_search_zip(args: &mut Vec<RGArg>) {
const SHORT: &str = "Search in compressed files.";
const LONG: &str = long!("\
Search in compressed files. Currently gz, bz2, xz, and lzma files are
Search in compressed files. Currently gz, bz2, xz, lzma, and lz4 files are
supported. This option expects the decompression binaries to be available in
your PATH.
@@ -1405,7 +1479,8 @@ This flag can be disabled with --no-search-zip.
");
let arg = RGArg::switch("search-zip").short("z")
.help(SHORT).long_help(LONG)
.overrides("no-search-zip");
.overrides("no-search-zip")
.overrides("pre");
args.push(arg);
let arg = RGArg::switch("no-search-zip")
@@ -1414,6 +1489,64 @@ This flag can be disabled with --no-search-zip.
args.push(arg);
}
fn flag_pre(args: &mut Vec<RGArg>) {
const SHORT: &str = "search outputs of COMMAND FILE for each FILE";
const LONG: &str = long!("\
For each input FILE, search the standard output of COMMAND FILE rather than the
contents of FILE. This option expects the COMMAND program to either be an
absolute path or to be available in your PATH. Either an empty string COMMAND
or the `--no-pre` flag will disable this behavior.
WARNING: When this flag is set, ripgrep will unconditionally spawn a
process for every file that is searched. Therefore, this can incur an
unnecessarily large performance penalty if you don't otherwise need the
flexibility offered by this flag.
A preprocessor is not run when ripgrep is searching stdin.
When searching over sets of files that may require one of several decoders
as preprocessors, COMMAND should be a wrapper program or script which first
classifies FILE based on magic numbers/content or based on the FILE name and
then dispatches to an appropriate preprocessor. Each COMMAND also has its
standard input connected to FILE for convenience.
For example, a shell script for COMMAND might look like:
case \"$1\" in
*.pdf)
exec pdftotext \"$1\" -
;;
*)
case $(file \"$1\") in
*Zstandard*)
exec pzstd -cdq
;;
*)
exec cat
;;
esac
;;
esac
The above script uses `pdftotext` to convert a PDF file to plain text. For
all other files, the script uses the `file` utility to sniff the type of the
file based on its contents. If it is a compressed file in the Zstandard format,
then `pzstd` is used to decompress the contents to stdout.
This overrides the -z/--search-zip flag.
");
let arg = RGArg::flag("pre", "COMMAND")
.help(SHORT).long_help(LONG)
.overrides("no-pre")
.overrides("search-zip");
args.push(arg);
let arg = RGArg::switch("no-pre")
.hidden()
.overrides("pre");
args.push(arg);
}
fn flag_smart_case(args: &mut Vec<RGArg>) {
const SHORT: &str = "Smart case search.";
const LONG: &str = long!("\
@@ -1448,6 +1581,25 @@ This flag can be disabled with --no-sort-files.
args.push(arg);
}
fn flag_stats(args: &mut Vec<RGArg>) {
const SHORT: &str = "Print statistics about this ripgrep search.";
const LONG: &str = long!("\
Print aggregate statistics about this ripgrep search. When this flag is
present, ripgrep will print the following stats to stdout at the end of the
search: number of matched lines, number of files with matches, number of files
searched, and the time taken for the entire search to complete.
This set of aggregate statistics may expand over time.
Note that this flag has no effect if --files, --files-with-matches or
--files-without-match is passed.");
let arg = RGArg::switch("stats")
.help(SHORT).long_help(LONG);
args.push(arg);
}
fn flag_text(args: &mut Vec<RGArg>) {
const SHORT: &str = "Search binary files as if they were text.";
const LONG: &str = long!("\


@@ -9,7 +9,7 @@ use std::sync::atomic::{AtomicBool, Ordering};
use clap;
use encoding_rs::Encoding;
use grep::{Grep, GrepBuilder, Error as GrepError};
use grep::{Grep, GrepBuilder};
use log;
use num_cpus;
use regex;
@@ -22,7 +22,7 @@ use ignore::overrides::{Override, OverrideBuilder};
use ignore::types::{FileTypeDef, Types, TypesBuilder};
use ignore;
use printer::{ColorSpecs, Printer};
use unescape::unescape;
use unescape::{escape, unescape};
use worker::{Worker, WorkerBuilder};
use config;
@@ -35,11 +35,14 @@ pub struct Args {
paths: Vec<PathBuf>,
after_context: usize,
before_context: usize,
byte_offset: bool,
can_match: bool,
color_choice: termcolor::ColorChoice,
colors: ColorSpecs,
column: bool,
context_separator: Vec<u8>,
count: bool,
count_matches: bool,
encoding: Option<&'static Encoding>,
files_with_matches: bool,
files_without_matches: bool,
@@ -54,13 +57,14 @@ pub struct Args {
invert_match: bool,
line_number: bool,
line_per_match: bool,
line_number_width: Option<usize>,
max_columns: Option<usize>,
max_count: Option<u64>,
max_depth: Option<usize>,
max_filesize: Option<u64>,
maxdepth: Option<usize>,
mmap: bool,
no_ignore: bool,
no_ignore_global: bool,
no_ignore_messages: bool,
no_ignore_parent: bool,
no_ignore_vcs: bool,
no_messages: bool,
@@ -77,7 +81,9 @@ pub struct Args {
type_list: bool,
types: Types,
with_filename: bool,
search_zip_files: bool
search_zip_files: bool,
preprocessor: Option<PathBuf>,
stats: bool
}
impl Args {
@@ -187,8 +193,7 @@ impl Args {
.only_matching(self.only_matching)
.path_separator(self.path_separator)
.with_filename(self.with_filename)
.max_columns(self.max_columns)
.line_number_width(self.line_number_width);
.max_columns(self.max_columns);
if let Some(ref rep) = self.replace {
p = p.replace(rep.clone());
}
@@ -199,6 +204,7 @@ impl Args {
pub fn file_separator(&self) -> Option<Vec<u8>> {
let contextless =
self.count
|| self.count_matches
|| self.files_with_matches
|| self.files_without_matches;
let use_heading_sep = self.heading && !contextless;
@@ -215,12 +221,22 @@ impl Args {
/// Returns true if the given arguments are known to never produce a match.
pub fn never_match(&self) -> bool {
self.max_count == Some(0)
!self.can_match || self.max_count == Some(0)
}
/// Returns whether ripgrep should track stats for this run
pub fn stats(&self) -> bool {
self.stats
}
/// Create a new writer for single-threaded searching with color support.
pub fn stdout(&self) -> termcolor::StandardStream {
termcolor::StandardStream::stdout(self.color_choice)
pub fn stdout(&self) -> Box<termcolor::WriteColor> {
if atty::is(atty::Stream::Stdout) {
Box::new(termcolor::StandardStream::stdout(self.color_choice))
} else {
Box::new(
termcolor::BufferedStandardStream::stdout(self.color_choice))
}
}
/// Returns a handle to stdout for filtering search.
@@ -259,7 +275,9 @@ impl Args {
WorkerBuilder::new(self.grep())
.after_context(self.after_context)
.before_context(self.before_context)
.byte_offset(self.byte_offset)
.count(self.count)
.count_matches(self.count_matches)
.encoding(self.encoding)
.files_with_matches(self.files_with_matches)
.files_without_matches(self.files_without_matches)
@@ -272,6 +290,7 @@ impl Args {
.quiet(self.quiet)
.text(self.text)
.search_zip_files(self.search_zip_files)
.preprocessor(self.preprocessor.clone())
.build()
}
@@ -296,6 +315,12 @@ impl Args {
self.no_messages
}
/// Returns true if error messages associated with parsing .ignore or
/// .gitignore files should be suppressed.
pub fn no_ignore_messages(&self) -> bool {
self.no_ignore_messages
}
/// Create a new recursive directory iterator over the paths in argv.
pub fn walker(&self) -> ignore::Walk {
self.walker_builder().build()
@@ -315,7 +340,7 @@ impl Args {
}
for path in &self.ignore_files {
if let Some(err) = wd.add_ignore(path) {
if !self.no_messages {
if !self.no_messages && !self.no_ignore_messages {
eprintln!("{}", err);
}
}
@@ -323,11 +348,13 @@ impl Args {
wd.follow_links(self.follow);
wd.hidden(!self.hidden);
wd.max_depth(self.maxdepth);
wd.max_depth(self.max_depth);
wd.max_filesize(self.max_filesize);
wd.overrides(self.glob_overrides.clone());
wd.types(self.types.clone());
wd.git_global(!self.no_ignore && !self.no_ignore_vcs);
wd.git_global(
!self.no_ignore && !self.no_ignore_vcs && !self.no_ignore_global
);
wd.git_ignore(!self.no_ignore && !self.no_ignore_vcs);
wd.git_exclude(!self.no_ignore && !self.no_ignore_vcs);
wd.ignore(!self.no_ignore);
@@ -356,16 +383,21 @@ impl<'a> ArgMatches<'a> {
let mmap = self.mmap(&paths)?;
let with_filename = self.with_filename(&paths);
let (before_context, after_context) = self.contexts()?;
let (count, count_matches) = self.counts();
let quiet = self.is_present("quiet");
let (grep, can_match) = self.grep()?;
let args = Args {
paths: paths,
after_context: after_context,
before_context: before_context,
byte_offset: self.is_present("byte-offset"),
can_match: can_match,
color_choice: self.color_choice(),
colors: self.color_specs()?,
column: self.column(),
context_separator: self.context_separator(),
count: self.is_present("count"),
count: count,
count_matches: count_matches,
encoding: self.encoding()?,
files_with_matches: self.is_present("files-with-matches"),
files_without_matches: self.is_present("files-without-match"),
@@ -373,20 +405,21 @@ impl<'a> ArgMatches<'a> {
files: self.is_present("files"),
follow: self.is_present("follow"),
glob_overrides: self.overrides()?,
grep: self.grep()?,
grep: grep,
heading: self.heading(),
hidden: self.hidden(),
ignore_files: self.ignore_files(),
invert_match: self.is_present("invert-match"),
line_number: line_number,
line_number_width: try!(self.usize_of("line-number-width")),
line_per_match: self.is_present("vimgrep"),
max_columns: self.usize_of_nonzero("max-columns")?,
max_count: self.usize_of("max-count")?.map(|n| n as u64),
max_depth: self.usize_of("max-depth")?,
max_filesize: self.max_filesize()?,
maxdepth: self.usize_of("maxdepth")?,
mmap: mmap,
no_ignore: self.no_ignore(),
no_ignore_global: self.no_ignore_global(),
no_ignore_messages: self.is_present("no-ignore-messages"),
no_ignore_parent: self.no_ignore_parent(),
no_ignore_vcs: self.no_ignore_vcs(),
no_messages: self.is_present("no-messages"),
@@ -403,7 +436,9 @@ impl<'a> ArgMatches<'a> {
type_list: self.is_present("type-list"),
types: self.types()?,
with_filename: with_filename,
search_zip_files: self.is_present("search-zip")
search_zip_files: self.is_present("search-zip"),
preprocessor: self.preprocessor(),
stats: self.stats()
};
if args.mmap {
debug!("will try to use memory maps");
@@ -458,17 +493,6 @@ impl<'a> ArgMatches<'a> {
}
}
/// Return the pattern that should be used for searching.
///
/// If multiple -e/--regexp flags are given, then they are all collapsed
/// into one pattern.
///
/// If any part of the pattern isn't valid UTF-8, then an error is
/// returned.
fn pattern(&self) -> Result<String> {
Ok(self.patterns()?.join("|"))
}
/// Get a sequence of all available patterns from the command line.
/// This includes reading the -e/--regexp and -f/--file flags.
///
@@ -518,8 +542,6 @@ impl<'a> ArgMatches<'a> {
// match first, and we wouldn't get colours in the output
if self.is_present("passthru") && !self.is_present("count") {
pats.push("^".to_string())
} else if pats.is_empty() {
pats.push(self.empty_pattern())
}
Ok(pats)
}
@@ -583,7 +605,7 @@ impl<'a> ArgMatches<'a> {
// This would normally just be an empty string, which works on its
// own, but if the patterns are joined in a set of alternations, then
// you wind up with `foo|`, which is invalid.
self.word_pattern("z{0}".to_string())
self.word_pattern("(?:z{0})*".to_string())
}
/// Returns true if and only if file names containing each match should
@@ -665,6 +687,9 @@ impl<'a> ArgMatches<'a> {
/// Returns true if and only if column numbers should be shown.
fn column(&self) -> bool {
if self.is_present("no-column") {
return false;
}
self.is_present("column") || self.is_present("vimgrep")
}
@@ -693,6 +718,19 @@ impl<'a> ArgMatches<'a> {
}
}
/// Returns the preprocessor command
fn preprocessor(&self) -> Option<PathBuf> {
if let Some(path) = self.value_of_os("pre") {
if path.is_empty() {
None
} else {
Some(Path::new(path).to_path_buf())
}
} else {
None
}
}
/// Returns the unescaped path separator in UTF-8 bytes.
fn path_separator(&self) -> Result<Option<u8>> {
match self.value_of_lossy("path-separator") {
@@ -704,7 +742,12 @@ impl<'a> ArgMatches<'a> {
} else if sep.len() > 1 {
Err(From::from(format!(
"A path separator must be exactly one byte, but \
the given separator is {} bytes.", sep.len())))
the given separator is {} bytes: {}\n\
In some shells on Windows '/' is automatically \
expanded. Use '//' instead.",
sep.len(),
escape(&sep),
)))
} else {
Ok(Some(sep[0]))
}
@@ -729,6 +772,28 @@ impl<'a> ArgMatches<'a> {
})
}
/// Returns whether the -c/--count or the --count-matches flags were
/// passed from the command line.
///
/// If --count-matches and --invert-match were passed in, behave
/// as if --count and --invert-match were passed in (i.e. rg will
/// count inverted matches as per existing behavior).
fn counts(&self) -> (bool, bool) {
let count = self.is_present("count");
let count_matches = self.is_present("count-matches");
let invert_matches = self.is_present("invert-match");
let only_matching = self.is_present("only-matching");
if count_matches && invert_matches {
// Treat `-v --count-matches` as `-v -c`.
(true, false)
} else if count && only_matching {
// Treat `-c --only-matching` as `--count-matches`.
(false, true)
} else {
(count, count_matches)
}
}
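The same precedence rules, restated as a standalone function so they can be eyeballed in isolation (the helper name is made up for this sketch):

/// Mirrors ArgMatches::counts above: returns (count, count_matches).
fn resolve_counts(
    count: bool,
    count_matches: bool,
    invert_match: bool,
    only_matching: bool,
) -> (bool, bool) {
    if count_matches && invert_match {
        // `-v --count-matches` behaves like `-v -c`.
        (true, false)
    } else if count && only_matching {
        // `-c -o` behaves like `--count-matches`.
        (false, true)
    } else {
        (count, count_matches)
    }
}

fn main() {
    assert_eq!(resolve_counts(false, true, true, false), (true, false));
    assert_eq!(resolve_counts(true, false, false, true), (false, true));
    assert_eq!(resolve_counts(true, false, false, false), (true, false));
    println!("count/count-matches precedence checks pass");
}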
/// Returns the user's color choice based on command line parameters and
/// environment.
fn color_choice(&self) -> termcolor::ColorChoice {
@@ -795,6 +860,19 @@ impl<'a> ArgMatches<'a> {
}
}
/// Returns whether stats should be tracked for this run of ripgrep.
/// This is automatically disabled if we're asked to only list the
/// files that will be searched, files with matches or files
/// without matches.
fn stats(&self) -> bool {
if self.is_present("files-with-matches") ||
self.is_present("files-without-match") {
return false;
}
self.is_present("stats")
}
/// Returns the approximate number of threads that ripgrep should use.
fn threads(&self) -> Result<usize> {
if self.is_present("sort-files") {
@@ -812,7 +890,10 @@ impl<'a> ArgMatches<'a> {
///
/// If there was a problem extracting the pattern from the command line
/// flags, then an error is returned.
fn grep(&self) -> Result<Grep> {
///
/// If no match can ever occur, then `false` is returned. Otherwise,
/// `true` is returned.
fn grep(&self) -> Result<(Grep, bool)> {
let smart =
self.is_present("smart-case")
&& !self.is_present("ignore-case")
@@ -820,7 +901,9 @@ impl<'a> ArgMatches<'a> {
let casei =
self.is_present("ignore-case")
&& !self.is_present("case-sensitive");
let mut gb = GrepBuilder::new(&self.pattern()?)
let pats = self.patterns()?;
let ok = !pats.is_empty();
let mut gb = GrepBuilder::new(&pats.join("|"))
.case_smart(smart)
.case_insensitive(casei)
.line_terminator(b'\n');
@@ -831,16 +914,7 @@ impl<'a> ArgMatches<'a> {
if let Some(limit) = self.regex_size_limit()? {
gb = gb.size_limit(limit);
}
gb.build().map_err(|err| {
match err {
GrepError::Regex(err) => {
let s = format!("{}\n(Hint: Try the --fixed-strings flag \
to search for a literal string.)", err.to_string());
From::from(s)
},
err => From::from(err)
}
})
Ok((gb.build()?, ok))
}
/// Builds the set of glob overrides from the command line flags.
@@ -943,6 +1017,11 @@ impl<'a> ArgMatches<'a> {
|| self.occurrences_of("unrestricted") >= 1
}
/// Returns true if global ignore files should be ignored.
fn no_ignore_global(&self) -> bool {
self.is_present("no-ignore-global") || self.no_ignore()
}
/// Returns true if parent ignore files should be ignored.
fn no_ignore_parent(&self) -> bool {
self.is_present("no-ignore-parent") || self.no_ignore()


@@ -1,456 +0,0 @@
use std::cmp;
use std::io::{self, Read};
use encoding_rs::{Decoder, Encoding, UTF_8};
/// A BOM is at least 2 bytes and at most 3 bytes.
///
/// If fewer than 2 bytes are available to be read at the beginning of a
/// reader, then a BOM is `None`.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
struct Bom {
bytes: [u8; 3],
len: usize,
}
impl Bom {
fn as_slice(&self) -> &[u8] {
&self.bytes[0..self.len]
}
fn decoder(&self) -> Option<Decoder> {
let bom = self.as_slice();
if bom.len() < 3 {
return None;
}
if let Some((enc, _)) = Encoding::for_bom(bom) {
if enc != UTF_8 {
return Some(enc.new_decoder_with_bom_removal());
}
}
None
}
}
/// `BomPeeker` wraps `R` and satisfies the `io::Read` interface while also
/// providing a peek at the BOM if one exists. Peeking at the BOM does not
/// advance the reader.
struct BomPeeker<R> {
rdr: R,
bom: Option<Bom>,
nread: usize,
}
impl<R: io::Read> BomPeeker<R> {
/// Create a new BomPeeker.
///
/// The first three bytes can be read using the `peek_bom` method, but
/// will not advance the reader.
fn new(rdr: R) -> BomPeeker<R> {
BomPeeker { rdr: rdr, bom: None, nread: 0 }
}
/// Peek at the first three bytes of the underlying reader.
///
/// This does not advance the reader provided by `BomPeeker`.
///
/// If the underlying reader does not have at least two bytes available,
/// then `None` is returned.
fn peek_bom(&mut self) -> io::Result<Bom> {
if let Some(bom) = self.bom {
return Ok(bom);
}
self.bom = Some(Bom { bytes: [0; 3], len: 0 });
let mut buf = [0u8; 3];
let bom_len = read_full(&mut self.rdr, &mut buf)?;
self.bom = Some(Bom { bytes: buf, len: bom_len });
Ok(self.bom.unwrap())
}
}
impl<R: io::Read> io::Read for BomPeeker<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
if self.nread < 3 {
let bom = self.peek_bom()?;
let bom = bom.as_slice();
if self.nread < bom.len() {
let rest = &bom[self.nread..];
let len = cmp::min(buf.len(), rest.len());
buf[..len].copy_from_slice(&rest[..len]);
self.nread += len;
return Ok(len);
}
}
let nread = self.rdr.read(buf)?;
self.nread += nread;
Ok(nread)
}
}
/// Like `io::Read::read_exact`, except it never returns `UnexpectedEof` and
/// instead returns the number of bytes read if EOF is seen before filling
/// `buf`.
fn read_full<R: io::Read>(
mut rdr: R,
mut buf: &mut [u8],
) -> io::Result<usize> {
let mut nread = 0;
while !buf.is_empty() {
match rdr.read(buf) {
Ok(0) => break,
Ok(n) => {
nread += n;
let tmp = buf;
buf = &mut tmp[n..];
}
Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {}
Err(e) => return Err(e),
}
}
Ok(nread)
}
/// A reader that transcodes to UTF-8. The source encoding is determined by
/// inspecting the BOM from the stream read from `R`, if one exists. If a
/// UTF-16 BOM exists, then the source stream is transcoded to UTF-8 with
/// invalid UTF-16 sequences translated to the Unicode replacement character.
/// In all other cases, the underlying reader is passed through unchanged.
///
/// `R` is the type of the underlying reader and `B` is the type of an internal
/// buffer used to store the results of transcoding.
///
/// Note that not all methods on `io::Read` work with this implementation.
/// For example, the `bytes` adapter method attempts to read a single byte at
/// a time, but this implementation requires a buffer of size at least `4`. If
/// a buffer of size less than 4 is given, then an error is returned.
pub struct DecodeReader<R, B> {
/// The underlying reader, wrapped in a peeker for reading a BOM if one
/// exists.
rdr: BomPeeker<R>,
/// The internal buffer to store transcoded bytes before they are read by
/// callers.
buf: B,
/// The current position in `buf`. Subsequent reads start here.
pos: usize,
/// The number of transcoded bytes in `buf`. Subsequent reads end here.
buflen: usize,
/// Whether this is the first read or not (in which we inspect the BOM).
first: bool,
/// Whether a "last" read has occurred. After this point, EOF will always
/// be returned.
last: bool,
/// The underlying text decoder derived from the BOM, if one exists.
decoder: Option<Decoder>,
}
impl<R: io::Read, B: AsMut<[u8]>> DecodeReader<R, B> {
/// Create a new transcoder that converts a source stream to valid UTF-8.
///
/// If an encoding is specified, then it is used to transcode `rdr` to
/// UTF-8. Otherwise, if no encoding is specified, and if a UTF-16 BOM is
/// found, then the corresponding UTF-16 encoding is used to transcode
/// `rdr` to UTF-8. In all other cases, `rdr` is assumed to be at least
/// ASCII-compatible and passed through untouched.
///
/// Errors in the encoding of `rdr` are handled with the Unicode
/// replacement character. If no encoding of `rdr` is specified, then
/// errors are not handled.
pub fn new(
rdr: R,
buf: B,
enc: Option<&'static Encoding>,
) -> DecodeReader<R, B> {
DecodeReader {
rdr: BomPeeker::new(rdr),
buf: buf,
buflen: 0,
pos: 0,
first: enc.is_none(),
last: false,
decoder: enc.map(|enc| enc.new_decoder_with_bom_removal()),
}
}
/// Fill the internal buffer from the underlying reader.
///
/// If there are unread bytes in the internal buffer, then we move them
/// to the beginning of the internal buffer and fill the remainder.
///
/// If the internal buffer is too small to read additional bytes, then an
/// error is returned.
#[inline(always)] // massive perf benefit (???)
fn fill(&mut self) -> io::Result<()> {
if self.pos < self.buflen {
if self.buflen >= self.buf.as_mut().len() {
return Err(io::Error::new(
io::ErrorKind::Other,
"DecodeReader: internal buffer exhausted"));
}
let newlen = self.buflen - self.pos;
let mut tmp = Vec::with_capacity(newlen);
tmp.extend_from_slice(&self.buf.as_mut()[self.pos..self.buflen]);
self.buf.as_mut()[..newlen].copy_from_slice(&tmp);
self.buflen = newlen;
} else {
self.buflen = 0;
}
self.pos = 0;
self.buflen +=
self.rdr.read(&mut self.buf.as_mut()[self.buflen..])?;
Ok(())
}
/// Transcode the inner stream to UTF-8 in `buf`. This assumes that there
/// is a decoder capable of transcoding the inner stream to UTF-8. This
/// returns the number of bytes written to `buf`.
///
/// When this function returns, exactly one of the following things will
/// be true:
///
/// 1. A non-zero number of bytes were written to `buf`.
/// 2. The underlying reader reached EOF.
/// 3. An error is returned: the internal buffer ran out of room.
/// 4. An I/O error occurred.
///
/// Note that `buf` must have at least 4 bytes of space.
fn transcode(&mut self, buf: &mut [u8]) -> io::Result<usize> {
assert!(buf.len() >= 4);
if self.last {
return Ok(0);
}
if self.pos >= self.buflen {
self.fill()?;
}
let mut nwrite = 0;
loop {
let (_, nin, nout, _) =
self.decoder.as_mut().unwrap().decode_to_utf8(
&self.buf.as_mut()[self.pos..self.buflen], buf, false);
self.pos += nin;
nwrite += nout;
// If we've written at least one byte to the caller-provided
// buffer, then our mission is complete.
if nwrite > 0 {
break;
}
// Otherwise, we know that our internal buffer has insufficient
// data to transcode at least one char, so we attempt to refill it.
self.fill()?;
// Quit on EOF.
if self.buflen == 0 {
self.pos = 0;
self.last = true;
let (_, _, nout, _) =
self.decoder.as_mut().unwrap().decode_to_utf8(
&[], buf, true);
return Ok(nout);
}
}
Ok(nwrite)
}
#[inline(never)] // impacts perf...
fn detect(&mut self) -> io::Result<()> {
let bom = self.rdr.peek_bom()?;
self.decoder = bom.decoder();
Ok(())
}
}
impl<R: io::Read, B: AsMut<[u8]>> io::Read for DecodeReader<R, B> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
if self.first {
self.first = false;
self.detect()?;
}
if self.decoder.is_none() {
return self.rdr.read(buf);
}
// When decoding UTF-8, we need at least 4 bytes of space to guarantee
// that we can decode at least one codepoint. If we don't have it, we
// can either return `0` for the number of bytes read or return an
// error. Since `0` would be interpreted as a possibly premature EOF,
// we opt for an error.
if buf.len() < 4 {
return Err(io::Error::new(
io::ErrorKind::Other,
"DecodeReader: byte buffer must have length at least 4"));
}
self.transcode(buf)
}
}
#[cfg(test)]
mod tests {
use std::io::Read;
use encoding_rs::Encoding;
use super::{Bom, BomPeeker, DecodeReader};
fn read_to_string<R: Read>(mut rdr: R) -> String {
let mut s = String::new();
rdr.read_to_string(&mut s).unwrap();
s
}
#[test]
fn peeker_empty() {
let buf = [];
let mut peeker = BomPeeker::new(&buf[..]);
assert_eq!(Bom { bytes: [0; 3], len: 0}, peeker.peek_bom().unwrap());
let mut tmp = [0; 100];
assert_eq!(0, peeker.read(&mut tmp).unwrap());
}
#[test]
fn peeker_one() {
let buf = [1];
let mut peeker = BomPeeker::new(&buf[..]);
assert_eq!(
Bom { bytes: [1, 0, 0], len: 1},
peeker.peek_bom().unwrap());
let mut tmp = [0; 100];
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(1, tmp[0]);
assert_eq!(0, peeker.read(&mut tmp).unwrap());
}
#[test]
fn peeker_two() {
let buf = [1, 2];
let mut peeker = BomPeeker::new(&buf[..]);
assert_eq!(
Bom { bytes: [1, 2, 0], len: 2},
peeker.peek_bom().unwrap());
let mut tmp = [0; 100];
assert_eq!(2, peeker.read(&mut tmp).unwrap());
assert_eq!(1, tmp[0]);
assert_eq!(2, tmp[1]);
assert_eq!(0, peeker.read(&mut tmp).unwrap());
}
#[test]
fn peeker_three() {
let buf = [1, 2, 3];
let mut peeker = BomPeeker::new(&buf[..]);
assert_eq!(
Bom { bytes: [1, 2, 3], len: 3},
peeker.peek_bom().unwrap());
let mut tmp = [0; 100];
assert_eq!(3, peeker.read(&mut tmp).unwrap());
assert_eq!(1, tmp[0]);
assert_eq!(2, tmp[1]);
assert_eq!(3, tmp[2]);
assert_eq!(0, peeker.read(&mut tmp).unwrap());
}
#[test]
fn peeker_four() {
let buf = [1, 2, 3, 4];
let mut peeker = BomPeeker::new(&buf[..]);
assert_eq!(
Bom { bytes: [1, 2, 3], len: 3},
peeker.peek_bom().unwrap());
let mut tmp = [0; 100];
assert_eq!(3, peeker.read(&mut tmp).unwrap());
assert_eq!(1, tmp[0]);
assert_eq!(2, tmp[1]);
assert_eq!(3, tmp[2]);
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(4, tmp[0]);
assert_eq!(0, peeker.read(&mut tmp).unwrap());
}
#[test]
fn peeker_one_at_a_time() {
let buf = [1, 2, 3, 4];
let mut peeker = BomPeeker::new(&buf[..]);
let mut tmp = [0; 1];
assert_eq!(0, peeker.read(&mut tmp[..0]).unwrap());
assert_eq!(0, tmp[0]);
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(1, tmp[0]);
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(2, tmp[0]);
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(3, tmp[0]);
assert_eq!(1, peeker.read(&mut tmp).unwrap());
assert_eq!(4, tmp[0]);
}
// In cases where all we have is a bom, we expect the bytes to be
// passed through unchanged.
#[test]
fn trans_utf16_bom() {
let srcbuf = vec![0xFF, 0xFE];
let mut dstbuf = vec![0; 8 * (1<<10)];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
let n = rdr.read(&mut dstbuf).unwrap();
assert_eq!(&*srcbuf, &dstbuf[..n]);
let srcbuf = vec![0xFE, 0xFF];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
let n = rdr.read(&mut dstbuf).unwrap();
assert_eq!(&*srcbuf, &dstbuf[..n]);
let srcbuf = vec![0xEF, 0xBB, 0xBF];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
let n = rdr.read(&mut dstbuf).unwrap();
assert_eq!(&*srcbuf, &dstbuf[..n]);
}
// Test basic UTF-16 decoding.
#[test]
fn trans_utf16_basic() {
let srcbuf = vec![0xFF, 0xFE, 0x61, 0x00];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
assert_eq!("a", read_to_string(&mut rdr));
let srcbuf = vec![0xFE, 0xFF, 0x00, 0x61];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
assert_eq!("a", read_to_string(&mut rdr));
}
// Test incomplete UTF-16 decoding. This ensures we see a replacement char
// if the stream ends with an unpaired code unit.
#[test]
fn trans_utf16_incomplete() {
let srcbuf = vec![0xFF, 0xFE, 0x61, 0x00, 0x00];
let mut rdr = DecodeReader::new(&*srcbuf, vec![0; 8 * (1<<10)], None);
assert_eq!("a\u{FFFD}", read_to_string(&mut rdr));
}
macro_rules! test_trans_simple {
($name:ident, $enc:expr, $srcbytes:expr, $dst:expr) => {
#[test]
fn $name() {
let srcbuf = &$srcbytes[..];
let enc = Encoding::for_label($enc.as_bytes());
let mut rdr = DecodeReader::new(
&*srcbuf, vec![0; 8 * (1<<10)], enc);
assert_eq!($dst, read_to_string(&mut rdr));
}
}
}
// This isn't exhaustive obviously, but it lets us test base level support.
test_trans_simple!(trans_simple_auto, "does not exist", b"\xD0\x96", "Ж");
test_trans_simple!(trans_simple_utf8, "utf-8", b"\xD0\x96", "Ж");
test_trans_simple!(trans_simple_utf16le, "utf-16le", b"\x16\x04", "Ж");
test_trans_simple!(trans_simple_utf16be, "utf-16be", b"\x04\x16", "Ж");
test_trans_simple!(trans_simple_chinese, "chinese", b"\xA7\xA8", "Ж");
test_trans_simple!(trans_simple_korean, "korean", b"\xAC\xA8", "Ж");
test_trans_simple!(
trans_simple_big5_hkscs, "big5-hkscs", b"\xC7\xFA", "Ж");
test_trans_simple!(trans_simple_gbk, "gbk", b"\xA7\xA8", "Ж");
test_trans_simple!(trans_simple_sjis, "sjis", b"\x84\x47", "Ж");
test_trans_simple!(trans_simple_eucjp, "euc-jp", b"\xA7\xA8", "Ж");
test_trans_simple!(trans_simple_latin1, "latin1", b"\xA9", "©");
}


@@ -44,6 +44,7 @@ lazy_static! {
m.insert("gz", DecompressionCommand::new("gzip", ARGS));
m.insert("bz2", DecompressionCommand::new("bzip2", ARGS));
m.insert("xz", DecompressionCommand::new("xz", ARGS));
m.insert("lz4", DecompressionCommand::new("lz4", ARGS));
const LZMA_ARGS: &[&str] = &["--format=lzma", "-d", "-c"];
m.insert("lzma", DecompressionCommand::new("xz", LZMA_ARGS));
@@ -55,6 +56,7 @@ lazy_static! {
builder.add(Glob::new("*.gz").unwrap());
builder.add(Glob::new("*.bz2").unwrap());
builder.add(Glob::new("*.xz").unwrap());
builder.add(Glob::new("*.lz4").unwrap());
builder.add(Glob::new("*.lzma").unwrap());
builder.build().unwrap()
};
@@ -63,6 +65,7 @@ lazy_static! {
builder.add(Glob::new("*.tar.gz").unwrap());
builder.add(Glob::new("*.tar.xz").unwrap());
builder.add(Glob::new("*.tar.bz2").unwrap());
builder.add(Glob::new("*.tar.lz4").unwrap());
builder.add(Glob::new("*.tgz").unwrap());
builder.add(Glob::new("*.txz").unwrap());
builder.add(Glob::new("*.tbz2").unwrap());
@@ -89,10 +92,6 @@ impl DecompressionReader {
/// If there is any error in spawning the decompression command, then
/// return `None`, after outputting any necessary debug or error messages.
pub fn from_path(path: &Path) -> Option<DecompressionReader> {
if is_tar_archive(path) {
debug!("{}: skipping tar archive", path.display());
return None;
}
let extension = match path.extension().and_then(OsStr::to_str) {
Some(extension) => extension,
None => {


@@ -3,6 +3,7 @@ extern crate bytecount;
#[macro_use]
extern crate clap;
extern crate encoding_rs;
extern crate encoding_rs_io;
extern crate globset;
extern crate grep;
extern crate ignore;
@@ -27,6 +28,7 @@ use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};
use args::Args;
use worker::Work;
@@ -40,8 +42,8 @@ macro_rules! errored {
mod app;
mod args;
mod config;
mod decoder;
mod decompressor;
mod preprocessor;
mod logger;
mod pathutil;
mod printer;
@@ -59,7 +61,7 @@ fn main() {
Ok(_) => process::exit(0),
Err(err) => {
eprintln!("{}", err);
process::exit(1);
process::exit(2);
}
}
}
@@ -85,16 +87,19 @@ fn run(args: Arc<Args>) -> Result<u64> {
}
fn run_parallel(args: &Arc<Args>) -> Result<u64> {
let start_time = Instant::now();
let bufwtr = Arc::new(args.buffer_writer());
let quiet_matched = args.quiet_matched();
let paths_searched = Arc::new(AtomicUsize::new(0));
let match_count = Arc::new(AtomicUsize::new(0));
let match_line_count = Arc::new(AtomicUsize::new(0));
let paths_matched = Arc::new(AtomicUsize::new(0));
args.walker_parallel().run(|| {
let args = Arc::clone(args);
let quiet_matched = quiet_matched.clone();
let paths_searched = paths_searched.clone();
let match_count = match_count.clone();
let match_line_count = match_line_count.clone();
let paths_matched = paths_matched.clone();
let bufwtr = Arc::clone(&bufwtr);
let mut buf = bufwtr.buffer();
let mut worker = args.worker();
@@ -109,6 +114,7 @@ fn run_parallel(args: &Arc<Args>) -> Result<u64> {
args.stdout_handle(),
args.files(),
args.no_messages(),
args.no_ignore_messages(),
) {
None => return Continue,
Some(dent) => dent,
@@ -125,10 +131,13 @@ fn run_parallel(args: &Arc<Args>) -> Result<u64> {
} else {
worker.run(&mut printer, Work::DirEntry(dent))
};
match_count.fetch_add(count as usize, Ordering::SeqCst);
match_line_count.fetch_add(count as usize, Ordering::SeqCst);
if quiet_matched.set_match(count > 0) {
return Quit;
}
if args.stats() && count > 0 {
paths_matched.fetch_add(1, Ordering::SeqCst);
}
}
// BUG(burntsushi): We should handle this error instead of ignoring
// it. See: https://github.com/BurntSushi/ripgrep/issues/200
@@ -141,27 +150,40 @@ fn run_parallel(args: &Arc<Args>) -> Result<u64> {
eprint_nothing_searched();
}
}
Ok(match_count.load(Ordering::SeqCst) as u64)
let match_line_count = match_line_count.load(Ordering::SeqCst) as u64;
let paths_searched = paths_searched.load(Ordering::SeqCst) as u64;
let paths_matched = paths_matched.load(Ordering::SeqCst) as u64;
if args.stats() {
print_stats(
match_line_count,
paths_searched,
paths_matched,
start_time.elapsed(),
);
}
Ok(match_line_count)
}
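Stripped of the ripgrep-specific plumbing, the statistics gathering above is the familiar Arc-wrapped atomic counter pattern: each walker thread bumps shared counters and the totals are read once at the end. A self-contained sketch (the thread count and per-thread numbers are invented):

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

fn main() {
    let match_line_count = Arc::new(AtomicUsize::new(0));
    let paths_matched = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4usize).map(|i| {
        let match_line_count = Arc::clone(&match_line_count);
        let paths_matched = Arc::clone(&paths_matched);
        thread::spawn(move || {
            // Pretend this worker found `i * 2` matching lines in one file.
            let count = i * 2;
            match_line_count.fetch_add(count, Ordering::SeqCst);
            if count > 0 {
                paths_matched.fetch_add(1, Ordering::SeqCst);
            }
        })
    }).collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!(
        "{} matched lines in {} files",
        match_line_count.load(Ordering::SeqCst),
        paths_matched.load(Ordering::SeqCst),
    );
}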
fn run_one_thread(args: &Arc<Args>) -> Result<u64> {
let stdout = args.stdout();
let mut stdout = stdout.lock();
let start_time = Instant::now();
let mut stdout = args.stdout();
let mut worker = args.worker();
let mut paths_searched: u64 = 0;
let mut match_count = 0;
let mut match_line_count = 0;
let mut paths_matched: u64 = 0;
for result in args.walker() {
let dent = match get_or_log_dir_entry(
result,
args.stdout_handle(),
args.files(),
args.no_messages(),
args.no_ignore_messages(),
) {
None => continue,
Some(dent) => dent,
};
let mut printer = args.printer(&mut stdout);
if match_count > 0 {
if match_line_count > 0 {
if args.quiet() {
break;
}
@@ -170,27 +192,38 @@ fn run_one_thread(args: &Arc<Args>) -> Result<u64> {
}
}
paths_searched += 1;
match_count +=
let count =
if dent.is_stdin() {
worker.run(&mut printer, Work::Stdin)
} else {
worker.run(&mut printer, Work::DirEntry(dent))
};
match_line_count += count;
if args.stats() && count > 0 {
paths_matched += 1;
}
}
if !args.paths().is_empty() && paths_searched == 0 {
if !args.no_messages() {
eprint_nothing_searched();
}
}
Ok(match_count)
if args.stats() {
print_stats(
match_line_count,
paths_searched,
paths_matched,
start_time.elapsed(),
);
}
Ok(match_line_count)
}
fn run_files_parallel(args: Arc<Args>) -> Result<u64> {
let print_args = Arc::clone(&args);
let (tx, rx) = mpsc::channel::<ignore::DirEntry>();
let print_thread = thread::spawn(move || {
let stdout = print_args.stdout();
let mut printer = print_args.printer(stdout.lock());
let mut printer = print_args.printer(print_args.stdout());
let mut file_count = 0;
for dent in rx.iter() {
if !print_args.quiet() {
@@ -209,8 +242,12 @@ fn run_files_parallel(args: Arc<Args>) -> Result<u64> {
args.stdout_handle(),
args.files(),
args.no_messages(),
args.no_ignore_messages(),
) {
tx.send(dent).unwrap();
if args.quiet() {
return ignore::WalkState::Quit
}
}
ignore::WalkState::Continue
})
@@ -219,8 +256,7 @@ fn run_files_parallel(args: Arc<Args>) -> Result<u64> {
}
fn run_files_one_thread(args: &Arc<Args>) -> Result<u64> {
let stdout = args.stdout();
let mut printer = args.printer(stdout.lock());
let mut printer = args.printer(args.stdout());
let mut file_count = 0;
for result in args.walker() {
let dent = match get_or_log_dir_entry(
@@ -228,21 +264,23 @@ fn run_files_one_thread(args: &Arc<Args>) -> Result<u64> {
args.stdout_handle(),
args.files(),
args.no_messages(),
args.no_ignore_messages(),
) {
None => continue,
Some(dent) => dent,
};
if !args.quiet() {
file_count += 1;
if args.quiet() {
break;
} else {
printer.path(dent.path());
}
file_count += 1;
}
Ok(file_count)
}
fn run_types(args: &Arc<Args>) -> Result<u64> {
let stdout = args.stdout();
let mut printer = args.printer(stdout.lock());
let mut printer = args.printer(args.stdout());
let mut ty_count = 0;
for def in args.type_defs() {
printer.type_def(def);
@@ -256,6 +294,7 @@ fn get_or_log_dir_entry(
stdout_handle: Option<&same_file::Handle>,
files_only: bool,
no_messages: bool,
no_ignore_messages: bool,
) -> Option<ignore::DirEntry> {
match result {
Err(err) => {
@@ -266,7 +305,7 @@ fn get_or_log_dir_entry(
}
Ok(dent) => {
if let Some(err) = dent.error() {
if !no_messages {
if !no_messages && !no_ignore_messages {
eprintln!("{}", err);
}
}
@@ -373,6 +412,22 @@ fn eprint_nothing_searched() {
Try running again with --debug.");
}
fn print_stats(
match_count: u64,
paths_searched: u64,
paths_matched: u64,
time_elapsed: Duration,
) {
let time_elapsed =
time_elapsed.as_secs() as f64
+ (time_elapsed.subsec_nanos() as f64 * 1e-9);
println!("\n{} matched lines\n\
{} files contained matches\n\
{} files searched\n\
{:.3} seconds", match_count, paths_matched,
paths_searched, time_elapsed);
}
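For example, an elapsed Duration of 2 seconds and 345_000_000 nanoseconds becomes 2.0 + 0.345 = 2.345, which the {:.3} format above prints as 2.345 seconds.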
// The Rust standard library suppresses the default SIGPIPE behavior, so that
// writing to a closed pipe doesn't kill the process. The goal is to instead
// handle errors through the normal result mechanism. Ripgrep needs some

src/preprocessor.rs (new file, 92 lines)

@@ -0,0 +1,92 @@
use std::fs::File;
use std::io::{self, Read};
use std::path::{Path, PathBuf};
use std::process::{self, Stdio};
use Result;
/// PreprocessorReader provides an `io::Read` impl to read a child process's output.
#[derive(Debug)]
pub struct PreprocessorReader {
cmd: PathBuf,
path: PathBuf,
child: process::Child,
done: bool,
}
impl PreprocessorReader {
/// Returns a handle to the stdout of the spawned preprocessor process for
/// `path`, which can be directly searched in the worker. When the returned
/// value is exhausted, the underlying process is reaped. If the underlying
/// process fails, then its stderr is read and converted into a normal
/// io::Error.
///
/// If there is any error in spawning the preprocessor command, then
/// return the corresponding error.
pub fn from_cmd_path(
cmd: PathBuf,
path: &Path,
) -> Result<PreprocessorReader> {
let child = process::Command::new(&cmd)
.arg(path)
.stdin(Stdio::from(File::open(path)?))
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.map_err(|err| {
format!(
"error running preprocessor command '{}': {}",
cmd.display(),
err,
)
})?;
Ok(PreprocessorReader {
cmd: cmd,
path: path.to_path_buf(),
child: child,
done: false,
})
}
fn read_error(&mut self) -> io::Result<io::Error> {
let mut errbytes = vec![];
self.child.stderr.as_mut().unwrap().read_to_end(&mut errbytes)?;
let errstr = String::from_utf8_lossy(&errbytes);
let errstr = errstr.trim();
Ok(if errstr.is_empty() {
let msg = format!(
"preprocessor command failed: '{} {}'",
self.cmd.display(),
self.path.display(),
);
io::Error::new(io::ErrorKind::Other, msg)
} else {
let msg = format!(
"preprocessor command failed: '{} {}': {}",
self.cmd.display(),
self.path.display(),
errstr,
);
io::Error::new(io::ErrorKind::Other, msg)
})
}
}
impl io::Read for PreprocessorReader {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
if self.done {
return Ok(0);
}
let nread = self.child.stdout.as_mut().unwrap().read(buf)?;
if nread == 0 {
self.done = true;
// Reap the child now that we're done reading.
// If the command failed, report stderr as an error.
if !self.child.wait()?.success() {
return Err(self.read_error()?);
}
}
Ok(nread)
}
}
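Under the hood this is the standard spawn-and-stream pattern from std::process. The same flow outside ripgrep's types, as a runnable sketch (the command and input path are placeholders chosen only so it runs on a typical Unix checkout; `cat` simply echoes the file back):

use std::fs::File;
use std::io::Read;
use std::process::{Command, Stdio};

fn main() {
    let (cmd, path) = ("cat", "Cargo.toml");
    let input = File::open(path).expect("open input file");
    let mut child = Command::new(cmd)
        .arg(path)
        // As in PreprocessorReader: FILE is both the argument and stdin.
        .stdin(Stdio::from(input))
        .stdout(Stdio::piped())
        .spawn()
        .expect("spawn preprocessor");
    let mut out = String::new();
    child.stdout.as_mut().unwrap()
        .read_to_string(&mut out)
        .expect("read preprocessor stdout");
    // Reap the child once its output is exhausted, as the reader above does.
    let status = child.wait().expect("wait on preprocessor");
    println!("exit: {}, bytes produced: {}", status, out.len());
}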


@@ -99,10 +99,6 @@ pub struct Printer<W> {
path_separator: Option<u8>,
/// Restrict lines to this many columns.
max_columns: Option<usize>,
/// Width of line number displayed. If the number of digits in the
/// line number is less than this, it is left padded with
/// spaces.
line_number_width: Option<usize>
}
impl<W: WriteColor> Printer<W> {
@@ -124,7 +120,6 @@ impl<W: WriteColor> Printer<W> {
colors: ColorSpecs::default(),
path_separator: None,
max_columns: None,
line_number_width: None
}
}
@@ -213,12 +208,6 @@ impl<W: WriteColor> Printer<W> {
self
}
/// Configure the width of the displayed line number
pub fn line_number_width(mut self, line_number_width: Option<usize>) -> Printer<W> {
self.line_number_width = line_number_width;
self
}
/// Returns true if and only if something has been printed.
pub fn has_printed(&self) -> bool {
self.has_printed
@@ -280,22 +269,34 @@ impl<W: WriteColor> Printer<W> {
start: usize,
end: usize,
line_number: Option<u64>,
byte_offset: Option<u64>
) {
if !self.line_per_match && !self.only_matching {
let mat = re
.find(&buf[start..end])
.map(|m| (m.start(), m.end()))
.unwrap_or((0, 0));
let mat =
if !self.needs_match() {
(0, 0)
} else {
re.find(&buf[start..end])
.map(|m| (m.start(), m.end()))
.unwrap_or((0, 0))
};
return self.write_match(
re, path, buf, start, end, line_number, mat.0, mat.1);
re, path, buf, start, end, line_number,
byte_offset, mat.0, mat.1);
}
for m in re.find_iter(&buf[start..end]) {
self.write_match(
re, path.as_ref(), buf, start, end,
line_number, m.start(), m.end());
re, path.as_ref(), buf, start, end, line_number,
byte_offset, m.start(), m.end());
}
}
fn needs_match(&self) -> bool {
self.column
|| self.replace.is_some()
|| self.only_matching
}
fn write_match<P: AsRef<Path>>(
&mut self,
re: &Regex,
@@ -304,6 +305,7 @@ impl<W: WriteColor> Printer<W> {
start: usize,
end: usize,
line_number: Option<u64>,
byte_offset: Option<u64>,
match_start: usize,
match_end: usize,
) {
@@ -321,6 +323,14 @@ impl<W: WriteColor> Printer<W> {
if self.column {
self.column_number(match_start as u64 + 1, b':');
}
if let Some(byte_offset) = byte_offset {
if self.only_matching {
self.write_byte_offset(
byte_offset + ((start + match_start) as u64), b':');
} else {
self.write_byte_offset(byte_offset + (start as u64), b':');
}
}
if self.replace.is_some() {
let mut count = 0;
let mut offsets = Vec::new();
@@ -395,6 +405,7 @@ impl<W: WriteColor> Printer<W> {
start: usize,
end: usize,
line_number: Option<u64>,
byte_offset: Option<u64>,
) {
if self.heading && self.with_filename && !self.has_printed {
self.write_file_sep();
@@ -407,6 +418,9 @@ impl<W: WriteColor> Printer<W> {
if let Some(line_number) = line_number {
self.line_number(line_number, b'-');
}
if let Some(byte_offset) = byte_offset {
self.write_byte_offset(byte_offset + (start as u64), b'-');
}
if self.max_columns.map_or(false, |m| end - start > m) {
self.write(b"[Omitted long context line]");
self.write_eol();
@@ -468,10 +482,7 @@ impl<W: WriteColor> Printer<W> {
}
fn line_number(&mut self, n: u64, sep: u8) {
let mut line_number = n.to_string();
if let Some(width) = self.line_number_width {
line_number = format!("{:>width$}", line_number, width = width);
}
let line_number = n.to_string();
self.write_colored(line_number.as_bytes(), |colors| colors.line());
self.separator(&[sep]);
}
@@ -481,6 +492,11 @@ impl<W: WriteColor> Printer<W> {
self.separator(&[sep]);
}
fn write_byte_offset(&mut self, o: u64, sep: u8) {
self.write_colored(o.to_string().as_bytes(), |colors| colors.column());
self.separator(&[sep]);
}
fn write(&mut self, buf: &[u8]) {
self.has_printed = true;
let _ = self.wtr.write_all(buf);


@@ -21,8 +21,10 @@ pub struct BufferSearcher<'a, W: 'a> {
grep: &'a Grep,
path: &'a Path,
buf: &'a [u8],
match_count: u64,
match_line_count: u64,
match_count: Option<u64>,
line_count: Option<u64>,
byte_offset: Option<u64>,
last_line: usize,
}
@@ -39,12 +41,24 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
grep: grep,
path: path,
buf: buf,
match_count: 0,
match_line_count: 0,
match_count: None,
line_count: None,
byte_offset: None,
last_line: 0,
}
}
/// If enabled, searching will print a 0-based offset of the
/// matching line (or the actual match if -o is specified) before
/// printing the line itself.
///
/// Disabled by default.
pub fn byte_offset(mut self, yes: bool) -> Self {
self.opts.byte_offset = yes;
self
}
/// If enabled, searching will print a count instead of each match.
///
/// Disabled by default.
@@ -53,6 +67,15 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
self
}
/// If enabled, searching will print the count of individual matches
/// instead of each match.
///
/// Disabled by default.
pub fn count_matches(mut self, yes: bool) -> Self {
self.opts.count_matches = yes;
self
}
/// If enabled, searching will print the path instead of each match.
///
/// Disabled by default.
@@ -118,8 +141,12 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
return 0;
}
self.match_count = 0;
self.match_line_count = 0;
self.line_count = if self.opts.line_number { Some(0) } else { None };
// The memory map searcher uses one contiguous block of bytes, so the
// offsets given the printer are sufficient to compute the byte offset.
self.byte_offset = if self.opts.byte_offset { Some(0) } else { None };
self.match_count = if self.opts.count_matches { Some(0) } else { None };
let mut last_end = 0;
for m in self.grep.iter(self.buf) {
if self.opts.invert_match {
@@ -128,29 +155,43 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
self.print_match(m.start(), m.end());
}
last_end = m.end();
if self.opts.terminate(self.match_count) {
if self.opts.terminate(self.match_line_count) {
break;
}
}
if self.opts.invert_match && !self.opts.terminate(self.match_count) {
if self.opts.invert_match && !self.opts.terminate(self.match_line_count) {
let upto = self.buf.len();
self.print_inverted_matches(last_end, upto);
}
if self.opts.count && self.match_count > 0 {
self.printer.path_count(self.path, self.match_count);
if self.opts.count && self.match_line_count > 0 {
self.printer.path_count(self.path, self.match_line_count);
} else if self.opts.count_matches
&& self.match_count.map_or(false, |c| c > 0)
{
self.printer.path_count(self.path, self.match_count.unwrap());
}
if self.opts.files_with_matches && self.match_count > 0 {
if self.opts.files_with_matches && self.match_line_count > 0 {
self.printer.path(self.path);
}
if self.opts.files_without_matches && self.match_count == 0 {
if self.opts.files_without_matches && self.match_line_count == 0 {
self.printer.path(self.path);
}
self.match_count
self.match_line_count
}
#[inline(always)]
fn count_individual_matches(&mut self, start: usize, end: usize) {
if let Some(ref mut count) = self.match_count {
for _ in self.grep.regex().find_iter(&self.buf[start..end]) {
*count += 1;
}
}
}
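In isolation, that tally is just find_iter over one line. A small sketch using the regex crate directly (on a &str for simplicity; the searcher itself works on byte slices), wired to the SHERLOCK fixture used by the tests below:

extern crate regex;

use regex::Regex;

fn main() {
    // First line of the SHERLOCK fixture; "the" occurs twice here, which is
    // part of the total of 4 that the count_matches test below expects.
    let line = "For the Doctor Watsons of this world, as opposed to the Sherlock";
    let re = Regex::new("the").unwrap();
    println!("{}", re.find_iter(line).count()); // prints 2
}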
#[inline(always)]
pub fn print_match(&mut self, start: usize, end: usize) {
self.match_count += 1;
self.match_line_count += 1;
self.count_individual_matches(start, end);
if self.opts.skip_matches() {
return;
}
@@ -158,7 +199,7 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
self.add_line(end);
self.printer.matched(
self.grep.regex(), self.path, self.buf,
start, end, self.line_count);
start, end, self.line_count, self.byte_offset);
}
#[inline(always)]
@@ -166,7 +207,7 @@ impl<'a, W: WriteColor> BufferSearcher<'a, W> {
debug_assert!(self.opts.invert_match);
let mut it = IterLines::new(self.opts.eol, start);
while let Some((s, e)) = it.next(&self.buf[..end]) {
if self.opts.terminate(self.match_count) {
if self.opts.terminate(self.match_line_count) {
return;
}
self.print_match(s, e);
@@ -271,6 +312,29 @@ and exhibited clearly, with a label attached.\
");
}
#[test]
fn byte_offset() {
let (_, out) = search(
"Sherlock", SHERLOCK, |s| s.byte_offset(true));
assert_eq!(out, "\
/baz.rs:0:For the Doctor Watsons of this world, as opposed to the Sherlock
/baz.rs:129:be, to a very large extent, the result of luck. Sherlock Holmes
");
}
#[test]
fn byte_offset_inverted() {
let (_, out) = search("Sherlock", SHERLOCK, |s| {
s.invert_match(true).byte_offset(true)
});
assert_eq!(out, "\
/baz.rs:65:Holmeses, success in the province of detective work must always
/baz.rs:193:can extract a clew from a wisp of straw or a flake of cigar ash;
/baz.rs:258:but Doctor Watson has to have it taken out for him and dusted,
/baz.rs:321:and exhibited clearly, with a label attached.
");
}
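Taken together, the two byte_offset tests enumerate offsets 0, 65, 129, 193, 258 and 321; each is the previous offset plus the byte length, newline included, of the preceding fixture line (the 65-byte first line puts the second line at offset 65, the 64-byte second line puts the third at 129, and so on).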
#[test]
fn count() {
let (count, out) = search(
@@ -279,6 +343,13 @@ and exhibited clearly, with a label attached.\
assert_eq!(out, "/baz.rs:2\n");
}
#[test]
fn count_matches() {
let (_, out) = search(
"the", SHERLOCK, |s| s.count_matches(true));
assert_eq!(out, "/baz.rs:4\n");
}
#[test]
fn files_with_matches() {
let (count, out) = search(


@@ -67,8 +67,10 @@ pub struct Searcher<'a, R, W: 'a> {
grep: &'a Grep,
path: &'a Path,
haystack: R,
match_count: u64,
match_line_count: u64,
match_count: Option<u64>,
line_count: Option<u64>,
byte_offset: Option<u64>,
last_match: Match,
last_printed: usize,
last_line: usize,
@@ -80,7 +82,9 @@ pub struct Searcher<'a, R, W: 'a> {
pub struct Options {
pub after_context: usize,
pub before_context: usize,
pub byte_offset: bool,
pub count: bool,
pub count_matches: bool,
pub files_with_matches: bool,
pub files_without_matches: bool,
pub eol: u8,
@@ -96,7 +100,9 @@ impl Default for Options {
Options {
after_context: 0,
before_context: 0,
byte_offset: false,
count: false,
count_matches: false,
files_with_matches: false,
files_without_matches: false,
eol: b'\n',
@@ -111,11 +117,11 @@ impl Default for Options {
}
impl Options {
/// Several options (--quiet, --count, --files-with-matches,
/// Several options (--quiet, --count, --count-matches, --files-with-matches,
/// --files-without-match) imply that we shouldn't ever display matches.
pub fn skip_matches(&self) -> bool {
self.count || self.files_with_matches || self.files_without_matches
|| self.quiet
|| self.quiet || self.count_matches
}
/// Some options (--quiet, --files-with-matches, --files-without-match)
@@ -124,12 +130,12 @@ impl Options {
self.files_with_matches || self.files_without_matches || self.quiet
}
/// Returns true if the search should terminate based on the match count.
pub fn terminate(&self, match_count: u64) -> bool {
if match_count > 0 && self.stop_after_first_match() {
/// Returns true if the search should terminate based on the match line count.
pub fn terminate(&self, match_line_count: u64) -> bool {
if match_line_count > 0 && self.stop_after_first_match() {
return true;
}
if self.max_count.map_or(false, |max| match_count >= max) {
if self.max_count.map_or(false, |max| match_line_count >= max) {
return true;
}
false
@@ -163,8 +169,10 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
grep: grep,
path: path,
haystack: haystack,
match_count: 0,
match_line_count: 0,
match_count: None,
line_count: None,
byte_offset: None,
last_match: Match::default(),
last_printed: 0,
last_line: 0,
@@ -186,6 +194,16 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self
}
/// If enabled, searching will print a 0-based offset of the
/// matching line (or the actual match if -o is specified) before
/// printing the line itself.
///
/// Disabled by default.
pub fn byte_offset(mut self, yes: bool) -> Self {
self.opts.byte_offset = yes;
self
}
/// If enabled, searching will print a count instead of each match.
///
/// Disabled by default.
@@ -194,6 +212,15 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self
}
/// If enabled, searching will print the count of individual matches
/// instead of each match.
///
/// Disabled by default.
pub fn count_matches(mut self, yes: bool) -> Self {
self.opts.count_matches = yes;
self
}
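The new `byte_offset` and `count_matches` setters follow the same consuming-builder pattern as the surrounding methods. A self-contained sketch of that pattern (the `SearcherOpts` type here is hypothetical, not the real API):

```rust
// Minimal consuming-builder sketch mirroring the setters in the hunk above.
// `SearcherOpts` and its fields are illustrative only.
#[derive(Debug, Default)]
struct SearcherOpts {
    byte_offset: bool,
    count: bool,
    count_matches: bool,
}

impl SearcherOpts {
    fn byte_offset(mut self, yes: bool) -> Self { self.byte_offset = yes; self }
    fn count(mut self, yes: bool) -> Self { self.count = yes; self }
    fn count_matches(mut self, yes: bool) -> Self { self.count_matches = yes; self }
}

fn main() {
    // Each setter consumes and returns `self`, so options chain naturally.
    let opts = SearcherOpts::default().byte_offset(true).count_matches(true);
    println!("{:?}", opts);
}
```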
/// If enabled, searching will print the path instead of each match.
///
/// Disabled by default.
@@ -257,8 +284,10 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
#[inline(never)]
pub fn run(mut self) -> Result<u64, Error> {
self.inp.reset();
self.match_count = 0;
self.match_line_count = 0;
self.line_count = if self.opts.line_number { Some(0) } else { None };
self.byte_offset = if self.opts.byte_offset { Some(0) } else { None };
self.match_count = if self.opts.count_matches { Some(0) } else { None };
self.last_match = Match::default();
self.after_context_remaining = 0;
while !self.terminate() {
@@ -308,36 +337,39 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self.print_after_context(upto);
}
}
if self.match_count > 0 {
if self.match_line_count > 0 {
if self.opts.count {
self.printer.path_count(self.path, self.match_count);
self.printer.path_count(self.path, self.match_line_count);
} else if self.opts.count_matches {
self.printer.path_count(self.path, self.match_count.unwrap());
} else if self.opts.files_with_matches {
self.printer.path(self.path);
}
} else if self.opts.files_without_matches {
self.printer.path(self.path);
}
Ok(self.match_count)
Ok(self.match_line_count)
}
#[inline(always)]
fn terminate(&self) -> bool {
self.opts.terminate(self.match_count)
self.opts.terminate(self.match_line_count)
}
#[inline(always)]
fn fill(&mut self) -> Result<bool, Error> {
let keep = if self.opts.before_context > 0 || self.opts.after_context > 0 {
let lines = 1 + cmp::max(
self.opts.before_context, self.opts.after_context);
start_of_previous_lines(
self.opts.eol,
&self.inp.buf,
self.inp.lastnl.saturating_sub(1),
lines)
} else {
self.inp.lastnl
};
let keep =
if self.opts.before_context > 0 || self.opts.after_context > 0 {
let lines = 1 + cmp::max(
self.opts.before_context, self.opts.after_context);
start_of_previous_lines(
self.opts.eol,
&self.inp.buf,
self.inp.lastnl.saturating_sub(1),
lines)
} else {
self.inp.lastnl
};
if keep < self.last_printed {
self.last_printed -= keep;
} else {
@@ -349,6 +381,7 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self.count_lines(keep);
self.last_line = 0;
}
self.count_byte_offset(keep);
let ok = self.inp.fill(&mut self.haystack, keep).map_err(|err| {
Error::from_io(err, &self.path)
})?;
@@ -410,7 +443,8 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
#[inline(always)]
fn print_match(&mut self, start: usize, end: usize) {
self.match_count += 1;
self.match_line_count += 1;
self.count_individual_matches(start, end);
if self.opts.skip_matches() {
return;
}
@@ -419,7 +453,7 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self.add_line(end);
self.printer.matched(
self.grep.regex(), self.path,
&self.inp.buf, start, end, self.line_count);
&self.inp.buf, start, end, self.line_count, self.byte_offset);
self.last_printed = end;
self.after_context_remaining = self.opts.after_context;
}
@@ -429,7 +463,8 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
self.count_lines(start);
self.add_line(end);
self.printer.context(
&self.path, &self.inp.buf, start, end, self.line_count);
&self.path, &self.inp.buf, start, end,
self.line_count, self.byte_offset);
self.last_printed = end;
}
@@ -447,6 +482,22 @@ impl<'a, R: io::Read, W: WriteColor> Searcher<'a, R, W> {
}
}
#[inline(always)]
fn count_byte_offset(&mut self, buf_last_end: usize) {
if let Some(ref mut byte_offset) = self.byte_offset {
*byte_offset += buf_last_end as u64;
}
}
#[inline(always)]
fn count_individual_matches(&mut self, start: usize, end: usize) {
if let Some(ref mut count) = self.match_count {
for _ in self.grep.regex().find_iter(&self.inp.buf[start..end]) {
*count += 1;
}
}
}
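`count_individual_matches` above counts every regex match within a matched line rather than the line itself. A minimal sketch of the same counting, assuming the `regex` crate as a dependency:

```rust
// Counting individual matches on a single matched line, as
// count_individual_matches does above. Requires the `regex` crate.
use regex::Regex;

fn main() {
    let line = "For the Doctor Watsons of this world, as opposed to the Sherlock";
    let re = Regex::new("the").unwrap();
    // --count would report this as 1 matched line; --count-matches counts
    // every non-overlapping regex match on it.
    let individual = re.find_iter(line).count();
    assert_eq!(individual, 2);
    println!("1 matched line, {} individual matches", individual);
}
```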
#[inline(always)]
fn count_lines(&mut self, upto: usize) {
if let Some(ref mut line_count) = self.line_count {
@@ -1006,6 +1057,48 @@ fn main() {
assert_eq!(out, "/baz.rs:2\n");
}
#[test]
fn byte_offset() {
let (_, out) = search_smallcap(
"Sherlock", SHERLOCK, |s| s.byte_offset(true));
assert_eq!(out, "\
/baz.rs:0:For the Doctor Watsons of this world, as opposed to the Sherlock
/baz.rs:129:be, to a very large extent, the result of luck. Sherlock Holmes
");
}
#[test]
fn byte_offset_with_before_context() {
let (_, out) = search_smallcap("dusted", SHERLOCK, |s| {
s.line_number(true).byte_offset(true).before_context(2)
});
assert_eq!(out, "\
/baz.rs-3-129-be, to a very large extent, the result of luck. Sherlock Holmes
/baz.rs-4-193-can extract a clew from a wisp of straw or a flake of cigar ash;
/baz.rs:5:258:but Doctor Watson has to have it taken out for him and dusted,
");
}
#[test]
fn byte_offset_inverted() {
let (_, out) = search_smallcap("Sherlock", SHERLOCK, |s| {
s.invert_match(true).byte_offset(true)
});
assert_eq!(out, "\
/baz.rs:65:Holmeses, success in the province of detective work must always
/baz.rs:193:can extract a clew from a wisp of straw or a flake of cigar ash;
/baz.rs:258:but Doctor Watson has to have it taken out for him and dusted,
/baz.rs:321:and exhibited clearly, with a label attached.
");
}
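The offsets asserted in these tests are the 0-based byte positions at which each printed line starts in the haystack. A tiny sketch that reproduces the first three offsets from the fixture text quoted above:

```rust
// Compute the byte offset of each line start, matching the 0/65/129
// values in the expected output above.
fn main() {
    let haystack = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
be, to a very large extent, the result of luck. Sherlock Holmes
";
    let mut offset = 0;
    for line in haystack.split_inclusive('\n') {
        println!("{}: {}", offset, line.trim_end());
        offset += line.len(); // next line starts after this line's newline
    }
}
```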
#[test]
fn count_matches() {
let (_, out) = search_smallcap(
"the", SHERLOCK, |s| s.count_matches(true));
assert_eq!(out, "/baz.rs:4\n");
}
#[test]
fn files_with_matches() {
let (count, out) = search_smallcap(


@@ -11,6 +11,15 @@ enum State {
Literal,
}
/// Escapes an arbitrary byte slice such that it can be presented as a human
/// readable string.
pub fn escape(bytes: &[u8]) -> String {
use std::ascii::escape_default;
let escaped = bytes.iter().flat_map(|&b| escape_default(b)).collect();
String::from_utf8(escaped).unwrap()
}
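The new `escape` helper relies on `std::ascii::escape_default`; a standalone sketch of the same approach:

```rust
// Escape arbitrary bytes into printable ASCII, mirroring the `escape`
// helper in the hunk above.
use std::ascii::escape_default;

fn escape(bytes: &[u8]) -> String {
    let escaped: Vec<u8> = bytes.iter().flat_map(|&b| escape_default(b)).collect();
    String::from_utf8(escaped).unwrap()
}

fn main() {
    // Control bytes come out as readable escapes.
    assert_eq!(escape(b"foo\nbar\tbaz"), r"foo\nbar\tbaz");
    println!("{}", escape(&[0x61, 0x00, 0x62])); // prints: a\x00b
}
```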
/// Unescapes a string given on the command line. It supports a limited set of
/// escape sequences:
///

View File

@@ -1,6 +1,6 @@
use std::fs::File;
use std::io;
use std::path::Path;
use std::path::{Path, PathBuf};
use encoding_rs::Encoding;
use grep::Grep;
@@ -8,8 +8,10 @@ use ignore::DirEntry;
use memmap::Mmap;
use termcolor::WriteColor;
use decoder::DecodeReader;
// use decoder::DecodeReader;
use encoding_rs_io::DecodeReaderBytesBuilder;
use decompressor::{self, DecompressionReader};
use preprocessor::PreprocessorReader;
use pathutil::strip_prefix;
use printer::Printer;
use search_buffer::BufferSearcher;
@@ -33,7 +35,9 @@ struct Options {
encoding: Option<&'static Encoding>,
after_context: usize,
before_context: usize,
byte_offset: bool,
count: bool,
count_matches: bool,
files_with_matches: bool,
files_without_matches: bool,
eol: u8,
@@ -43,6 +47,7 @@ struct Options {
no_messages: bool,
quiet: bool,
text: bool,
preprocessor: Option<PathBuf>,
search_zip_files: bool
}
@@ -53,7 +58,9 @@ impl Default for Options {
encoding: None,
after_context: 0,
before_context: 0,
byte_offset: false,
count: false,
count_matches: false,
files_with_matches: false,
files_without_matches: false,
eol: b'\n',
@@ -64,6 +71,7 @@ impl Default for Options {
quiet: false,
text: false,
search_zip_files: false,
preprocessor: None,
}
}
}
@@ -106,6 +114,16 @@ impl WorkerBuilder {
self
}
/// If enabled, searching will print a 0-based offset of the
/// matching line (or the actual match if -o is specified) before
/// printing the line itself.
///
/// Disabled by default.
pub fn byte_offset(mut self, yes: bool) -> Self {
self.opts.byte_offset = yes;
self
}
/// If enabled, searching will print a count instead of each match.
///
/// Disabled by default.
@@ -114,6 +132,15 @@ impl WorkerBuilder {
self
}
/// If enabled, searching will print the count of individual matches
/// instead of each match.
///
/// Disabled by default.
pub fn count_matches(mut self, yes: bool) -> Self {
self.opts.count_matches = yes;
self
}
/// Set the encoding to use to read each file.
///
/// If the encoding is `None` (the default), then the encoding is
@@ -199,6 +226,12 @@ impl WorkerBuilder {
self.opts.search_zip_files = yes;
self
}
/// If set, search the output of the preprocessor command run on each file.
pub fn preprocessor(mut self, command: Option<PathBuf>) -> Self {
self.opts.preprocessor = command;
self
}
}
/// Worker is responsible for executing searches on file paths, while choosing
@@ -227,7 +260,18 @@ impl Worker {
}
Work::DirEntry(dent) => {
let mut path = dent.path();
if self.opts.search_zip_files
if self.opts.preprocessor.is_some() {
let cmd = self.opts.preprocessor.clone().unwrap();
match PreprocessorReader::from_cmd_path(cmd, path) {
Ok(reader) => self.search(printer, path, reader),
Err(err) => {
if !self.opts.no_messages {
eprintln!("{}", err);
}
return 0;
}
}
} else if self.opts.search_zip_files
&& decompressor::is_compressed(path)
{
match DecompressionReader::from_path(path) {
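The hunk above dispatches to `PreprocessorReader` when `--pre` is set, so the search runs over the command's stdout rather than the file's bytes. A rough std-only sketch of that plumbing (the helper below is illustrative, not the real `PreprocessorReader`):

```rust
// Run a preprocessor command against a path and search its stdout, which
// is the idea behind --pre. Error handling and streaming are simplified.
use std::process::Command;

fn preprocess(cmd: &str, path: &str) -> std::io::Result<String> {
    let out = Command::new(cmd).arg(path).output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    // e.g. `xzcat sherlock.xz` decompresses to stdout, and that decompressed
    // text is what gets searched (assumes xzcat and the file exist).
    let text = preprocess("xzcat", "sherlock.xz")?;
    for line in text.lines().filter(|l| l.contains("Sherlock")) {
        println!("{}", line);
    }
    Ok(())
}
```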
@@ -276,14 +320,18 @@ impl Worker {
path: &Path,
rdr: R,
) -> Result<u64> {
let rdr = DecodeReader::new(
rdr, &mut self.decodebuf, self.opts.encoding);
let rdr = DecodeReaderBytesBuilder::new()
.encoding(self.opts.encoding)
.utf8_passthru(true)
.build_with_buffer(rdr, &mut self.decodebuf)?;
let searcher = Searcher::new(
&mut self.inpbuf, printer, &self.grep, path, rdr);
searcher
.after_context(self.opts.after_context)
.before_context(self.opts.before_context)
.byte_offset(self.opts.byte_offset)
.count(self.opts.count)
.count_matches(self.opts.count_matches)
.files_with_matches(self.opts.files_with_matches)
.files_without_matches(self.opts.files_without_matches)
.eol(self.opts.eol)
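The search now wraps its reader with `encoding_rs_io`'s `DecodeReaderBytesBuilder` instead of the old in-tree `DecodeReader`. A minimal standalone use of the same builder (the real code additionally enables `utf8_passthru` and supplies its own buffer):

```rust
// Transcode a BOM-prefixed UTF-16LE source to UTF-8 on the fly while
// reading, roughly what the hunk above sets up for search input.
// Requires the `encoding_rs_io` crate.
use std::io::Read;

use encoding_rs_io::DecodeReaderBytesBuilder;

fn main() -> std::io::Result<()> {
    // "hi" encoded as UTF-16LE with a byte order mark.
    let raw: &[u8] = &[0xFF, 0xFE, b'h', 0x00, b'i', 0x00];
    let mut rdr = DecodeReaderBytesBuilder::new().build(raw);
    let mut out = String::new();
    rdr.read_to_string(&mut out)?;
    assert_eq!(out, "hi"); // BOM consumed, content transcoded to UTF-8
    Ok(())
}
```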
@@ -322,7 +370,9 @@ impl Worker {
}
let searcher = BufferSearcher::new(printer, &self.grep, path, buf);
Ok(searcher
.byte_offset(self.opts.byte_offset)
.count(self.opts.count)
.count_matches(self.opts.count_matches)
.files_with_matches(self.opts.files_with_matches)
.files_without_matches(self.opts.files_without_matches)
.eol(self.opts.eol)
@@ -341,14 +391,17 @@ impl Worker {
#[cfg(unix)]
fn mmap(&self, file: &File) -> Result<Option<Mmap>> {
use libc::{ENODEV, EOVERFLOW};
use libc::{EOVERFLOW, ENODEV, ENOMEM};
let err = match mmap_readonly(file) {
Ok(mmap) => return Ok(Some(mmap)),
Err(err) => err,
};
let code = err.raw_os_error();
if code == Some(ENODEV) || code == Some(EOVERFLOW) {
if code == Some(EOVERFLOW)
|| code == Some(ENODEV)
|| code == Some(ENOMEM)
{
return Ok(None);
}
Err(From::from(err))
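The widened error check above makes ripgrep fall back from memory maps to regular reads on a few more expected `errno` values. A small sketch of that fallback test, assuming the `libc` crate for the constants:

```rust
// Decide whether an mmap failure should trigger a fallback to normal
// reading, mirroring the errno check in the hunk above. Unix-only.
#[cfg(unix)]
fn should_fall_back(err: &std::io::Error) -> bool {
    use libc::{ENODEV, ENOMEM, EOVERFLOW};
    match err.raw_os_error() {
        Some(code) => code == EOVERFLOW || code == ENODEV || code == ENOMEM,
        None => false,
    }
}

#[cfg(unix)]
fn main() {
    let err = std::io::Error::from_raw_os_error(libc::ENOMEM);
    assert!(should_fall_back(&err)); // fall back instead of reporting an error
}

#[cfg(not(unix))]
fn main() {}
```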


@@ -1,3 +0,0 @@
This project is dual-licensed under the Unlicense and MIT licenses.
You may use this code under the terms of either license.


@@ -1,20 +0,0 @@
[package]
name = "termcolor"
version = "0.3.5" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
A simple cross platform library for writing colored text to a terminal.
"""
documentation = "https://docs.rs/termcolor"
homepage = "https://github.com/BurntSushi/ripgrep/tree/master/termcolor"
repository = "https://github.com/BurntSushi/ripgrep/tree/master/termcolor"
readme = "README.md"
keywords = ["windows", "win", "color", "ansi", "console"]
license = "Unlicense/MIT"
[lib]
name = "termcolor"
bench = false
[target.'cfg(windows)'.dependencies]
wincolor = { version = "0.1.6", path = "../wincolor" }


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 Andrew Gallant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,86 +1,2 @@
termcolor
=========
A simple cross platform library for writing colored text to a terminal. This
library writes colored text either using standard ANSI escape sequences or
by interacting with the Windows console. Several convenient abstractions
are provided for use in single-threaded or multi-threaded command line
applications.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.png)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/termcolor.svg)](https://crates.io/crates/termcolor)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/termcolor](https://docs.rs/termcolor)
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
termcolor = "0.3"
```
and this to your crate root:
```rust
extern crate termcolor;
```
### Organization
The `WriteColor` trait extends the `io::Write` trait with methods for setting
colors or resetting them.
`StandardStream` and `StandardStreamLock` both satisfy `WriteColor` and are
analogous to `std::io::Stdout` and `std::io::StdoutLock`, or `std::io::Stderr`
and `std::io::StderrLock`.
`Buffer` is an in memory buffer that supports colored text. In a parallel
program, each thread might write to its own buffer. A buffer can be printed to
stdout or stderr using a `BufferWriter`. The advantage of this design is that
each thread can work in parallel on a buffer without having to synchronize
access to global resources such as the Windows console. Moreover, this design
also prevents interleaving of buffer output.
`Ansi` and `NoColor` both satisfy `WriteColor` for arbitrary implementors of
`io::Write`. These types are useful when you know exactly what you need. An
analogous type for the Windows console is not provided since it cannot exist.
### Example: using `StandardStream`
The `StandardStream` type in this crate works similarly to `std::io::Stdout`,
except it is augmented with methods for coloring by the `WriteColor` trait.
For example, to write some green text:
```rust
use std::io::Write;
use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};
let mut stdout = StandardStream::stdout(ColorChoice::Always);
stdout.set_color(ColorSpec::new().set_fg(Some(Color::Green)))?;
writeln!(&mut stdout, "green text!")?;
```
### Example: using `BufferWriter`
A `BufferWriter` can create buffers and write buffers to stdout or stderr. It
does *not* implement `io::Write` or `WriteColor` itself. Instead, `Buffer`
implements `io::Write` and `termcolor::WriteColor`.
This example shows how to print some green text to stderr.
```rust
use std::io::Write;
use termcolor::{BufferWriter, Color, ColorChoice, ColorSpec, WriteColor};
let mut bufwtr = BufferWriter::stderr(ColorChoice::Always);
let mut buffer = bufwtr.buffer();
buffer.set_color(ColorSpec::new().set_fg(Some(Color::Green)))?;
writeln!(&mut buffer, "green text!")?;
bufwtr.print(&buffer)?;
```
termcolor has moved to its own repository:
https://github.com/BurntSushi/termcolor


@@ -1,24 +0,0 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>

File diff suppressed because it is too large.

tests/data/sherlock.lz4 (new binary file, not shown)


@@ -107,22 +107,6 @@ sherlock!(line_numbers, |wd: WorkDir, mut cmd: Command| {
assert_eq!(lines, expected);
});
sherlock!(line_number_width, |wd: WorkDir, mut cmd: Command| {
cmd.arg("-n");
cmd.arg("--line-number-width").arg("2");
let lines: String = wd.stdout(&mut cmd);
let expected = " 1:For the Doctor Watsons of this world, as opposed to the Sherlock
3:be, to a very large extent, the result of luck. Sherlock Holmes
";
assert_eq!(lines, expected);
});
sherlock!(line_number_width_padding_character_error, |wd: WorkDir, mut cmd: Command| {
cmd.arg("-n");
cmd.arg("--line-number-width").arg("02");
wd.assert_non_empty_stderr(&mut cmd);
});
sherlock!(columns, |wd: WorkDir, mut cmd: Command| {
cmd.arg("--column");
let lines: String = wd.stdout(&mut cmd);
@@ -395,6 +379,16 @@ sherlock!(csglob, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
assert_eq!(lines, "file2.html:Sherlock\n");
});
sherlock!(byte_offset_only_matching, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("-b").arg("-o");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
sherlock:56:Sherlock
sherlock:177:Sherlock
";
assert_eq!(lines, expected);
});
sherlock!(count, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("--count");
let lines: String = wd.stdout(&mut cmd);
@@ -402,6 +396,27 @@ sherlock!(count, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
assert_eq!(lines, expected);
});
sherlock!(count_matches, "the", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("--count-matches");
let lines: String = wd.stdout(&mut cmd);
let expected = "sherlock:4\n";
assert_eq!(lines, expected);
});
sherlock!(count_matches_inverted, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("--count-matches").arg("--invert-match");
let lines: String = wd.stdout(&mut cmd);
let expected = "sherlock:4\n";
assert_eq!(lines, expected);
});
sherlock!(count_matches_via_only, "the", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("--count").arg("--only-matching");
let lines: String = wd.stdout(&mut cmd);
let expected = "sherlock:4\n";
assert_eq!(lines, expected);
});
sherlock!(files_with_matches, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
cmd.arg("--files-with-matches");
let lines: String = wd.stdout(&mut cmd);
@@ -575,6 +590,7 @@ sherlock!(no_ignore_hidden, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
});
sherlock!(ignore_git, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "sherlock\n");
wd.assert_err(&mut cmd);
});
@@ -620,7 +636,7 @@ sherlock!(ignore_git_parent_stop, "Sherlock", ".",
//
// .gitignore (contains `sherlock`)
// foo/
// .git
// .git/
// bar/
// sherlock
//
@@ -643,6 +659,39 @@ sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
assert_eq!(lines, expected);
});
// Like ignore_git_parent_stop, but with a .git file instead of a .git
// directory.
sherlock!(ignore_git_parent_stop_file, "Sherlock", ".",
|wd: WorkDir, mut cmd: Command| {
// This tests that searching parent directories for .gitignore files stops
// after it sees a .git *file*. A .git file is used for submodules. To test
// this, we create this directory hierarchy:
//
// .gitignore (contains `sherlock`)
// foo/
// .git
// bar/
// sherlock
//
// And we perform the search inside `foo/bar/`. ripgrep will stop looking
// for .gitignore files after it sees `foo/.git`, and therefore not
// respect the top-level `.gitignore` containing `sherlock`.
wd.remove("sherlock");
wd.create(".gitignore", "sherlock\n");
wd.create_dir("foo");
wd.create("foo/.git", "");
wd.create_dir("foo/bar");
wd.create("foo/bar/sherlock", hay::SHERLOCK);
cmd.current_dir(wd.path().join("foo").join("bar"));
let lines: String = wd.stdout(&mut cmd);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
assert_eq!(lines, expected);
});
sherlock!(ignore_ripgrep_parent_no_stop, "Sherlock", ".",
|wd: WorkDir, mut cmd: Command| {
// This is like the `ignore_git_parent_stop` test, except it checks that
@@ -662,6 +711,7 @@ sherlock!(no_parent_ignore_git, "Sherlock", ".",
|wd: WorkDir, mut cmd: Command| {
// Set up a directory hierarchy like this:
//
// .git/
// .gitignore
// foo/
// .gitignore
@@ -679,6 +729,7 @@ sherlock!(no_parent_ignore_git, "Sherlock", ".",
// In other words, we should only see results from `sherlock`, not from
// `watson`.
wd.remove("sherlock");
wd.create_dir(".git");
wd.create(".gitignore", "sherlock\n");
wd.create_dir("foo");
wd.create("foo/.gitignore", "watson\n");
@@ -772,8 +823,37 @@ sherlock:5:12:but Doctor Watson has to have it taken out for him and dusted,
assert_eq!(lines, expected);
});
sherlock!(vimgrep_no_line, "Sherlock|Watson", ".",
|wd: WorkDir, mut cmd: Command| {
cmd.arg("--vimgrep").arg("-N");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
sherlock:16:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:57:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:49:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:12:but Doctor Watson has to have it taken out for him and dusted,
";
assert_eq!(lines, expected);
});
sherlock!(vimgrep_no_line_no_column, "Sherlock|Watson", ".",
|wd: WorkDir, mut cmd: Command| {
cmd.arg("--vimgrep").arg("-N").arg("--no-column");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:but Doctor Watson has to have it taken out for him and dusted,
";
assert_eq!(lines, expected);
});
// See: https://github.com/BurntSushi/ripgrep/issues/16
clean!(regression_16, "xyz", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "ghi/");
wd.create_dir("ghi");
wd.create_dir("def/ghi");
@@ -800,11 +880,7 @@ clean!(regression_25, "test", ".", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/30
clean!(regression_30, "test", ".", |wd: WorkDir, mut cmd: Command| {
if cfg!(windows) {
wd.create(".gitignore", "vendor/**\n!vendor\\manifest");
} else {
wd.create(".gitignore", "vendor/**\n!vendor/manifest");
}
wd.create(".gitignore", "vendor/**\n!vendor/manifest");
wd.create_dir("vendor");
wd.create("vendor/manifest", "test");
@@ -833,6 +909,7 @@ clean!(regression_50, "xyz", ".", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/65
clean!(regression_65, "xyz", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "a/");
wd.create_dir("a");
wd.create("a/foo", "xyz");
@@ -842,6 +919,7 @@ clean!(regression_65, "xyz", ".", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/67
clean!(regression_67, "test", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "/*\n!/dir");
wd.create_dir("dir");
wd.create_dir("foo");
@@ -854,6 +932,7 @@ clean!(regression_67, "test", ".", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/87
clean!(regression_87, "test", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "foo\n**no-vcs**");
wd.create("foo", "test");
wd.assert_err(&mut cmd);
@@ -861,6 +940,7 @@ clean!(regression_87, "test", ".", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/90
clean!(regression_90, "test", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "!.foo");
wd.create(".foo", "test");
@@ -921,6 +1001,7 @@ clean!(regression_127, "Sherlock", ".", |wd: WorkDir, mut cmd: Command| {
// ripgrep should ignore 'foo/sherlock' giving us results only from
// 'foo/watson' but on Windows ripgrep will include both 'foo/sherlock' and
// 'foo/watson' in the search results.
wd.create_dir(".git");
wd.create(".gitignore", "foo/sherlock\n");
wd.create_dir("foo");
wd.create("foo/sherlock", hay::SHERLOCK);
@@ -948,6 +1029,7 @@ clean!(regression_128, "x", ".", |wd: WorkDir, mut cmd: Command| {
// TODO(burntsushi): Darwin doesn't like this test for some reason.
#[cfg(not(target_os = "macos"))]
clean!(regression_131, "test", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", "TopÑapa");
wd.create("TopÑapa", "test");
wd.assert_err(&mut cmd);
@@ -1235,6 +1317,7 @@ clean!(regression_599, "^$", "input.txt", |wd: WorkDir, mut cmd: Command| {
// See: https://github.com/BurntSushi/ripgrep/issues/807
clean!(regression_807, "test", ".", |wd: WorkDir, mut cmd: Command| {
wd.create_dir(".git");
wd.create(".gitignore", ".a/b");
wd.create_dir(".a/b");
wd.create_dir(".a/c");
@@ -1246,6 +1329,12 @@ clean!(regression_807, "test", ".", |wd: WorkDir, mut cmd: Command| {
assert_eq!(lines, format!("{}:test\n", path(".a/c/file")));
});
// See: https://github.com/BurntSushi/ripgrep/issues/900
sherlock!(regression_900, "-fpat", "sherlock", |wd: WorkDir, mut cmd: Command| {
wd.create("pat", "");
wd.assert_err(&mut cmd);
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
clean!(feature_1_sjis, "Шерлок Холмс", ".", |wd: WorkDir, mut cmd: Command| {
let sherlock =
@@ -1660,15 +1749,25 @@ sherlock!(feature_419_zero_as_shortcut_for_null, "Sherlock", ".",
assert_eq!(lines, "sherlock\x002\n");
});
// See: https://github.com/BurntSushi/ripgrep/issues/709
clean!(suggest_fixed_strings_for_invalid_regex, "foo(", ".",
|wd: WorkDir, mut cmd: Command| {
wd.assert_non_empty_stderr(&mut cmd);
#[test]
fn preprocessing() {
if !cmd_exists("xzcat") {
return;
}
let xz_file = include_bytes!("./data/sherlock.xz");
let output = cmd.output().unwrap();
let err = String::from_utf8_lossy(&output.stderr);
assert_eq!(err.contains("--fixed-strings"), true);
});
let wd = WorkDir::new("feature_preprocessing");
wd.create_bytes("sherlock.xz", xz_file);
let mut cmd = wd.command();
cmd.arg("--pre").arg("xzcat").arg("Sherlock").arg("sherlock.xz");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
assert_eq!(lines, expected);
}
#[test]
fn compressed_gzip() {
@@ -1730,6 +1829,26 @@ be, to a very large extent, the result of luck. Sherlock Holmes
assert_eq!(lines, expected);
}
#[test]
fn compressed_lz4() {
if !cmd_exists("lz4") {
return;
}
let lz4_file = include_bytes!("./data/sherlock.lz4");
let wd = WorkDir::new("feature_search_compressed");
wd.create_bytes("sherlock.lz4", lz4_file);
let mut cmd = wd.command();
cmd.arg("-z").arg("Sherlock").arg("sherlock.lz4");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
assert_eq!(lines, expected);
}
#[test]
fn compressed_lzma() {
if !cmd_exists("xz") {
@@ -1765,7 +1884,7 @@ fn compressed_failing_gzip() {
let output = cmd.output().unwrap();
let err = String::from_utf8_lossy(&output.stderr);
assert_eq!(err.contains("not in gzip format"), true);
assert!(!err.is_empty());
}
sherlock!(feature_196_persistent_config, "sherlock",
@@ -1784,6 +1903,51 @@ be, to a very large extent, the result of luck. Sherlock Holmes
assert_eq!(lines, expected);
});
sherlock!(feature_411_single_threaded_search_stats,
|wd: WorkDir, mut cmd: Command| {
cmd.arg("--stats");
let lines: String = wd.stdout(&mut cmd);
assert_eq!(lines.contains("2 matched lines"), true);
assert_eq!(lines.contains("1 files contained matches"), true);
assert_eq!(lines.contains("1 files searched"), true);
assert_eq!(lines.contains("seconds"), true);
});
#[test]
fn feature_411_parallel_search_stats() {
let wd = WorkDir::new("feature_411");
wd.create("sherlock_1", hay::SHERLOCK);
wd.create("sherlock_2", hay::SHERLOCK);
let mut cmd = wd.command();
cmd.arg("--stats");
cmd.arg("Sherlock");
cmd.arg("./");
let lines: String = wd.stdout(&mut cmd);
assert_eq!(lines.contains("4 matched lines"), true);
assert_eq!(lines.contains("2 files contained matches"), true);
assert_eq!(lines.contains("2 files searched"), true);
assert_eq!(lines.contains("seconds"), true);
}
sherlock!(feature_411_ignore_stats_1, |wd: WorkDir, mut cmd: Command| {
cmd.arg("--files-with-matches");
cmd.arg("--stats");
let lines: String = wd.stdout(&mut cmd);
assert_eq!(lines.contains("seconds"), false);
});
sherlock!(feature_411_ignore_stats_2, |wd: WorkDir, mut cmd: Command| {
cmd.arg("--files-without-match");
cmd.arg("--stats");
let lines: String = wd.stdout(&mut cmd);
assert_eq!(lines.contains("seconds"), false);
});
#[test]
fn feature_740_passthru() {
let wd = WorkDir::new("feature_740");
@@ -1915,7 +2079,7 @@ fn regression_270() {
wd.create("foo", "-test");
let mut cmd = wd.command();
cmd.arg("-e").arg("-test");
cmd.arg("-e").arg("-test").arg("./");
let lines: String = wd.stdout(&mut cmd);
assert_eq!(lines, path("foo:-test\n"));
}
@@ -2064,7 +2228,7 @@ fn regression_693_context_option_in_contextless_mode() {
wd.create("bar", "xyz\n");
let mut cmd = wd.command();
cmd.arg("-C1").arg("-c").arg("--sort-files").arg("xyz");
cmd.arg("-C1").arg("-c").arg("--sort-files").arg("xyz").arg("./");
let lines: String = wd.stdout(&mut cmd);
let expected = "\
@@ -2084,3 +2248,33 @@ fn type_list() {
// This can change over time, so just make sure we print something.
assert!(!lines.is_empty());
}
// See: https://github.com/BurntSushi/ripgrep/issues/948
sherlock!(
exit_code_match_success,
".",
".",
|wd: WorkDir, mut cmd: Command| {
wd.assert_exit_code(0, &mut cmd);
}
);
// See: https://github.com/BurntSushi/ripgrep/issues/948
sherlock!(
exit_code_no_match,
"6d28e48b5224a42b167e{10}",
".",
|wd: WorkDir, mut cmd: Command| {
wd.assert_exit_code(1, &mut cmd);
}
);
// See: https://github.com/BurntSushi/ripgrep/issues/948
sherlock!(
exit_code_error,
"*",
".",
|wd: WorkDir, mut cmd: Command| {
wd.assert_exit_code(2, &mut cmd);
}
);


@@ -21,7 +21,8 @@ pub struct WorkDir {
/// The directory in which this test executable is running.
root: PathBuf,
/// The directory in which the test should run. If a test needs to create
/// files, they should go in here.
/// files, they should go in here. This directory is also used as the CWD
/// for any processes created by the test.
dir: PathBuf,
}
@@ -31,9 +32,15 @@ impl WorkDir {
/// to a logical grouping of tests.
pub fn new(name: &str) -> WorkDir {
let id = NEXT_ID.fetch_add(1, Ordering::SeqCst);
let root = env::current_exe().unwrap()
.parent().expect("executable's directory").to_path_buf();
let dir = root.join(TEST_DIR).join(name).join(&format!("{}", id));
let root = env::current_exe()
.unwrap()
.parent()
.expect("executable's directory")
.to_path_buf();
let dir = env::temp_dir()
.join(TEST_DIR)
.join(name)
.join(&format!("{}", id));
nice_err(&dir, repeat(|| fs::create_dir_all(&dir)));
WorkDir {
root: root,
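The revised `WorkDir::new` above derives each test's working directory from the system temporary directory plus a global counter, so every test gets a fresh, isolated location. A condensed sketch of that scheme (the `TEST_DIR` name and error handling here are assumed placeholders):

```rust
// Unique per-test working directories under the system temp dir, as in
// WorkDir::new above. TEST_DIR is an assumed placeholder name.
use std::env;
use std::fs;
use std::path::PathBuf;
use std::sync::atomic::{AtomicUsize, Ordering};

static NEXT_ID: AtomicUsize = AtomicUsize::new(0);
const TEST_DIR: &str = "rg-integration-tests";

fn new_work_dir(name: &str) -> PathBuf {
    let id = NEXT_ID.fetch_add(1, Ordering::SeqCst);
    let dir = env::temp_dir().join(TEST_DIR).join(name).join(id.to_string());
    fs::create_dir_all(&dir).expect("failed to create work dir");
    dir
}

fn main() {
    let (d1, d2) = (new_work_dir("demo"), new_work_dir("demo"));
    assert_ne!(d1, d2); // each call gets its own numbered directory
    println!("{}\n{}", d1.display(), d2.display());
}
```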
@@ -49,7 +56,11 @@ impl WorkDir {
/// Try to create a new file with the given name and contents in this
/// directory.
pub fn try_create<P: AsRef<Path>>(&self, name: P, contents: &str) -> io::Result<()> {
pub fn try_create<P: AsRef<Path>>(
&self,
name: P,
contents: &str,
) -> io::Result<()> {
let path = self.dir.join(name);
self.try_create_bytes(path, contents.as_bytes())
}
@@ -70,7 +81,11 @@ impl WorkDir {
/// Try to create a new file with the given name and contents in this
/// directory.
fn try_create_bytes<P: AsRef<Path>>(&self, path: P, contents: &[u8]) -> io::Result<()> {
fn try_create_bytes<P: AsRef<Path>>(
&self,
path: P,
contents: &[u8],
) -> io::Result<()> {
let mut file = File::create(&path)?;
file.write_all(contents)?;
file.flush()
@@ -99,28 +114,11 @@ impl WorkDir {
}
/// Returns the path to the ripgrep executable.
#[cfg(not(windows))]
pub fn bin(&self) -> PathBuf {
let path = self.root.join("rg");
if !path.is_file() {
// Looks like a recent version of Cargo changed the cwd or the
// location of the test executable.
self.root.join("../rg")
} else {
path
}
}
/// Returns the path to the ripgrep executable.
#[cfg(windows)]
pub fn bin(&self) -> PathBuf {
let path = self.root.join("rg.exe");
if !path.is_file() {
// Looks like a recent version of Cargo changed the cwd or the
// location of the test executable.
if cfg!(windows) {
self.root.join("../rg.exe")
} else {
path
self.root.join("../rg")
}
}
@@ -190,7 +188,11 @@ impl WorkDir {
match stdout.parse() {
Ok(t) => t,
Err(err) => {
panic!("could not convert from string: {:?}\n\n{}", err, stdout);
panic!(
"could not convert from string: {:?}\n\n{}",
err,
stdout
);
}
}
}
@@ -221,7 +223,10 @@ impl WorkDir {
write!(stdin, "{}", input)
});
let output = self.expect_success(cmd, child.wait_with_output().unwrap());
let output = self.expect_success(
cmd,
child.wait_with_output().unwrap(),
);
worker.join().unwrap().unwrap();
output
}
@@ -261,18 +266,42 @@ impl WorkDir {
pub fn assert_err(&self, cmd: &mut process::Command) {
let o = cmd.output().unwrap();
if o.status.success() {
panic!("\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
cmd, self.dir.display(), o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr));
panic!(
"\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
cmd,
self.dir.display(),
o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr)
);
}
}
/// Runs the given command and asserts that its exit code matches expected
/// exit code.
pub fn assert_exit_code(
&self,
expected_code: i32,
cmd: &mut process::Command,
) {
let code = cmd.status().unwrap().code().unwrap();
assert_eq!(
expected_code, code,
"\n\n===== {:?} =====\n\
expected exit code did not match\
\n\nexpected: {}\
\n\nfound: {}\
\n\n=====\n",
cmd, expected_code, code
);
}
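`assert_exit_code` above is a thin wrapper over `std::process`; a standalone sketch of the same assertion (the demo uses the Unix `true`/`false` binaries, so it assumes a Unix-like PATH):

```rust
// Assert that a command exits with a specific code, as the new WorkDir
// helper above does.
use std::process::Command;

fn assert_exit_code(expected: i32, cmd: &mut Command) {
    let code = cmd.status().unwrap().code().unwrap();
    assert_eq!(expected, code, "unexpected exit code for {:?}", cmd);
}

#[cfg(unix)]
fn main() {
    assert_exit_code(0, &mut Command::new("true"));
    assert_exit_code(1, &mut Command::new("false"));
}

#[cfg(not(unix))]
fn main() {}
```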
/// Runs the given command and asserts that something was printed to
/// stderr.
pub fn assert_non_empty_stderr(&self, cmd: &mut process::Command) {


@@ -1,3 +0,0 @@
This project is dual-licensed under the Unlicense and MIT licenses.
You may use this code under the terms of either license.


@@ -1,21 +0,0 @@
[package]
name = "wincolor"
version = "0.1.6" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
A simple Windows specific API for controlling text color in a Windows console.
"""
documentation = "https://docs.rs/wincolor"
homepage = "https://github.com/BurntSushi/ripgrep/tree/master/wincolor"
repository = "https://github.com/BurntSushi/ripgrep/tree/master/wincolor"
readme = "README.md"
keywords = ["windows", "win", "color", "ansi", "console"]
license = "Unlicense/MIT"
[lib]
name = "wincolor"
bench = false
[dependencies.winapi]
version = "0.3"
features = ["consoleapi", "minwindef", "processenv", "winbase", "wincon"]


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 Andrew Gallant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,44 +1,2 @@
wincolor
========
A simple Windows specific API for controlling text color in a Windows console.
The purpose of this crate is to expose the full inflexibility of the Windows
console without any platform independent abstraction.
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/wincolor.svg)](https://crates.io/crates/wincolor)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/wincolor](https://docs.rs/wincolor)
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
wincolor = "0.1"
```
and this to your crate root:
```rust
extern crate wincolor;
```
### Example
This is a simple example that shows how to write text with a foreground color
of cyan and the intense attribute set:
```rust
use wincolor::{Console, Color, Intense};
let mut con = Console::stdout().unwrap();
con.fg(Intense::Yes, Color::Cyan).unwrap();
println!("This text will be intense cyan.");
con.reset().unwrap();
println!("This text will be normal.");
```
wincolor has moved to the termcolor repository:
https://github.com/BurntSushi/termcolor


@@ -1,24 +0,0 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>


@@ -1,33 +0,0 @@
/*!
This crate provides a safe and simple Windows specific API to control
text attributes in the Windows console. Text attributes are limited to
foreground/background colors, as well as whether to make colors intense or not.
Note that on non-Windows platforms, this crate is empty but will compile.
# Example
```no_run
# #[cfg(windows)]
# {
use wincolor::{Console, Color, Intense};
let mut con = Console::stdout().unwrap();
con.fg(Intense::Yes, Color::Cyan).unwrap();
println!("This text will be intense cyan.");
con.reset().unwrap();
println!("This text will be normal.");
# }
```
*/
#![deny(missing_docs)]
#[cfg(windows)]
extern crate winapi;
#[cfg(windows)]
pub use win::*;
#[cfg(windows)]
mod win;


@@ -1,266 +0,0 @@
use std::io;
use std::mem;
use winapi::shared::minwindef::{DWORD, WORD};
use winapi::um::consoleapi;
use winapi::um::processenv;
use winapi::um::winbase::{STD_ERROR_HANDLE, STD_OUTPUT_HANDLE};
use winapi::um::wincon::{
self,
FOREGROUND_BLUE as FG_BLUE,
FOREGROUND_GREEN as FG_GREEN,
FOREGROUND_RED as FG_RED,
FOREGROUND_INTENSITY as FG_INTENSITY,
};
const FG_CYAN: WORD = FG_BLUE | FG_GREEN;
const FG_MAGENTA: WORD = FG_BLUE | FG_RED;
const FG_YELLOW: WORD = FG_GREEN | FG_RED;
const FG_WHITE: WORD = FG_BLUE | FG_GREEN | FG_RED;
/// A Windows console.
///
/// This represents a very limited set of functionality available to a Windows
/// console. In particular, it can only change text attributes such as color
/// and intensity.
///
/// There is no way to "write" to this console. Simply write to
/// stdout or stderr instead, while interleaving instructions to the console
/// to change text attributes.
///
/// A common pitfall when using a console is to forget to flush writes to
/// stdout before setting new text attributes.
#[derive(Debug)]
pub struct Console {
handle_id: DWORD,
start_attr: TextAttributes,
cur_attr: TextAttributes,
}
impl Console {
/// Get a console for a standard I/O stream.
fn create_for_stream(handle_id: DWORD) -> io::Result<Console> {
let mut info = unsafe { mem::zeroed() };
let res = unsafe {
let handle = processenv::GetStdHandle(handle_id);
wincon::GetConsoleScreenBufferInfo(handle, &mut info)
};
if res == 0 {
return Err(io::Error::last_os_error());
}
let attr = TextAttributes::from_word(info.wAttributes);
Ok(Console {
handle_id: handle_id,
start_attr: attr,
cur_attr: attr,
})
}
/// Create a new Console to stdout.
///
/// If there was a problem creating the console, then an error is returned.
pub fn stdout() -> io::Result<Console> {
Self::create_for_stream(STD_OUTPUT_HANDLE)
}
/// Create a new Console to stderr.
///
/// If there was a problem creating the console, then an error is returned.
pub fn stderr() -> io::Result<Console> {
Self::create_for_stream(STD_ERROR_HANDLE)
}
/// Applies the current text attributes.
fn set(&mut self) -> io::Result<()> {
let attr = self.cur_attr.to_word();
let res = unsafe {
let handle = processenv::GetStdHandle(self.handle_id);
wincon::SetConsoleTextAttribute(handle, attr)
};
if res == 0 {
return Err(io::Error::last_os_error());
}
Ok(())
}
/// Apply the given intensity and color attributes to the console
/// foreground.
///
/// If there was a problem setting attributes on the console, then an error
/// is returned.
pub fn fg(&mut self, intense: Intense, color: Color) -> io::Result<()> {
self.cur_attr.fg_color = color;
self.cur_attr.fg_intense = intense;
self.set()
}
/// Apply the given intensity and color attributes to the console
/// background.
///
/// If there was a problem setting attributes on the console, then an error
/// is returned.
pub fn bg(&mut self, intense: Intense, color: Color) -> io::Result<()> {
self.cur_attr.bg_color = color;
self.cur_attr.bg_intense = intense;
self.set()
}
/// Reset the console text attributes to their original settings.
///
/// The original settings correspond to the text attributes on the console
/// when this `Console` value was created.
///
/// If there was a problem setting attributes on the console, then an error
/// is returned.
pub fn reset(&mut self) -> io::Result<()> {
self.cur_attr = self.start_attr;
self.set()
}
/// Toggle virtual terminal processing.
///
/// This method attempts to toggle virtual terminal processing for this
/// console. If there was a problem toggling it, then an error is returned.
/// On success, the caller may assume that toggling it was successful.
///
/// When virtual terminal processing is enabled, characters emitted to the
/// console are parsed for VT100 and similar control character sequences
/// that control color and other similar operations.
pub fn set_virtual_terminal_processing(
&mut self,
yes: bool,
) -> io::Result<()> {
let vt = wincon::ENABLE_VIRTUAL_TERMINAL_PROCESSING;
let mut old_mode = 0;
let handle = unsafe { processenv::GetStdHandle(self.handle_id) };
if unsafe { consoleapi::GetConsoleMode(handle, &mut old_mode) } == 0 {
return Err(io::Error::last_os_error());
}
let new_mode =
if yes {
old_mode | vt
} else {
old_mode & !vt
};
if old_mode == new_mode {
return Ok(());
}
if unsafe { consoleapi::SetConsoleMode(handle, new_mode) } == 0 {
return Err(io::Error::last_os_error());
}
Ok(())
}
}
/// A representation of text attributes for the Windows console.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
struct TextAttributes {
fg_color: Color,
fg_intense: Intense,
bg_color: Color,
bg_intense: Intense,
}
impl TextAttributes {
fn to_word(&self) -> WORD {
let mut w = 0;
w |= self.fg_color.to_fg();
w |= self.fg_intense.to_fg();
w |= self.bg_color.to_bg();
w |= self.bg_intense.to_bg();
w
}
fn from_word(word: WORD) -> TextAttributes {
TextAttributes {
fg_color: Color::from_fg(word),
fg_intense: Intense::from_fg(word),
bg_color: Color::from_bg(word),
bg_intense: Intense::from_bg(word),
}
}
}
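`TextAttributes` packs the foreground color and intensity into the low four bits of the console attribute word and the background into the next four, which is why the `to_bg`/`from_bg` helpers below shift by 4. A small sketch of that packing using the standard Windows constant values:

```rust
// Console attribute packing as used by TextAttributes above: foreground in
// the low nibble, background in the high nibble of a 16-bit WORD.
type Word = u16;

const FG_BLUE: Word = 0x1;      // FOREGROUND_BLUE
const FG_GREEN: Word = 0x2;     // FOREGROUND_GREEN
const FG_RED: Word = 0x4;       // FOREGROUND_RED
const FG_INTENSITY: Word = 0x8; // FOREGROUND_INTENSITY

fn main() {
    let fg_cyan = FG_BLUE | FG_GREEN;
    let bg_intense_red = (FG_RED | FG_INTENSITY) << 4; // backgrounds shift by 4
    let attr = fg_cyan | bg_intense_red;
    assert_eq!(attr & 0b1111, fg_cyan);            // low nibble: foreground
    assert_eq!(attr >> 4, FG_RED | FG_INTENSITY);  // high nibble: background
    println!("packed console attribute: {:#010b}", attr);
}
```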
/// Whether to use intense colors or not.
#[allow(missing_docs)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum Intense {
Yes,
No,
}
impl Intense {
fn to_bg(&self) -> WORD {
self.to_fg() << 4
}
fn from_bg(word: WORD) -> Intense {
Intense::from_fg(word >> 4)
}
fn to_fg(&self) -> WORD {
match *self {
Intense::No => 0,
Intense::Yes => FG_INTENSITY,
}
}
fn from_fg(word: WORD) -> Intense {
if word & FG_INTENSITY > 0 {
Intense::Yes
} else {
Intense::No
}
}
}
/// The set of available colors for use with a Windows console.
#[allow(missing_docs)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum Color {
Black,
Blue,
Green,
Red,
Cyan,
Magenta,
Yellow,
White,
}
impl Color {
fn to_bg(&self) -> WORD {
self.to_fg() << 4
}
fn from_bg(word: WORD) -> Color {
Color::from_fg(word >> 4)
}
fn to_fg(&self) -> WORD {
match *self {
Color::Black => 0,
Color::Blue => FG_BLUE,
Color::Green => FG_GREEN,
Color::Red => FG_RED,
Color::Cyan => FG_CYAN,
Color::Magenta => FG_MAGENTA,
Color::Yellow => FG_YELLOW,
Color::White => FG_WHITE,
}
}
fn from_fg(word: WORD) -> Color {
match word & 0b111 {
FG_BLUE => Color::Blue,
FG_GREEN => Color::Green,
FG_RED => Color::Red,
FG_CYAN => Color::Cyan,
FG_MAGENTA => Color::Magenta,
FG_YELLOW => Color::Yellow,
FG_WHITE => Color::White,
_ => Color::Black,
}
}
}