Compare commits


20 Commits

Author SHA1 Message Date
Andrew Gallant
7eaaa04c69 ripgrep: small cleanups 2018-08-20 17:34:45 -04:00
Andrew Gallant
87a627631c doc: add section on PCRE2 performance 2018-08-20 17:34:45 -04:00
Andrew Gallant
9df60e164e deps: update other dependencies to latest 2018-08-20 17:34:45 -04:00
Andrew Gallant
afa06c518a deps: update libripgrep crate versions
This prepares them for an initial 0.1.0 release.
2018-08-20 17:34:45 -04:00
Andy Freeland
e46aeb34f8 ignore/types: add .mako and .mao for Mako templates
I've personally never seen `.mao`, but GitHub includes it in Linguist: 
4f11062304/lib/linguist/languages.yml (L2702-L2709)
2018-08-20 15:26:49 -04:00
dana
d8f187e990 complete: add completion reference guide 2018-08-20 11:53:19 -04:00
dana
7d93d2ab05 ripgrep: add --no-multiline-dotall 2018-08-20 07:50:00 -04:00
dana
9ca2d68e94 ripgrep: fix typos in option descriptions 2018-08-20 07:50:00 -04:00
dana
60b0e3ff80 complete: update wording, exclusion, &c. 2018-08-20 07:50:00 -04:00
dana
3a1c081c13 test_complete: match certain long options in description bodies 2018-08-20 07:50:00 -04:00
Andrew Gallant
d5c0b03030 changelog: massive update for libripgrep
This commit updates the CHANGELOG to reflect all the work done to make
libripgrep a reality.

* Closes #162 (libripgrep)
* Closes #176 (multiline search)
* Closes #188 (opt-in PCRE2 support)
* Closes #244 (JSON output)
* Closes #416 (Windows CRLF support)
* Closes #917 (trim prefix whitespace)
* Closes #993 (add --null-data flag)
* Closes #997 (--passthru works with --replace)

* Fixes #2 (memory maps and context handling work)
* Fixes #200 (ripgrep stops when pipe is closed)
* Fixes #389 (more intuitive `-w/--word-regexp`)
* Fixes #643 (detection of stdin on Windows is better)
* Fixes #441, Fixes #690, Fixes #980 (empty matching lines are weird)
* Fixes #764 (coalesce color escapes)
* Fixes #922 (memory maps failing is no big deal)
* Fixes #937 (color escapes no longer used for empty matches)
* Fixes #940 (--passthru does not impact exit status)
* Fixes #1013 (show runtime CPU features in --version output)
2018-08-20 07:10:19 -04:00
Andrew Gallant
eb184d7711 tests: re-tool integration tests
This basically rewrites every integration test. We reduce the amount of
magic involved here in terms of which arguments are being passed to
ripgrep processes. To make up for the boilerplate saved by the magic,
we make the Dir (formerly WorkDir) type a bit nicer to use, along with a
new TestCommand that wraps a std::process::Command. In exchange, we get
tests that are easier to read and write.

We also run every test with the `--pcre2` flag to make sure that works,
when PCRE2 is available.
2018-08-20 07:10:19 -04:00
Andrew Gallant
bb110c1ebe ripgrep: migrate to libripgrep
This commit does the work to delete the old `grep` crate and effectively
rewrite most of ripgrep core to use the new libripgrep crates. The new
`grep` crate is now a facade that collects the various crates that make
up libripgrep.

The most complex part of ripgrep core is now arguably the translation
between command line parameters and the library options, which is
ultimately where we want to be.
2018-08-20 07:10:19 -04:00
Andrew Gallant
d9ca529356 libripgrep: initial commit introducing libripgrep
libripgrep is not any one library, but rather, a collection of libraries
that roughly separate the following key distinct phases in a grep
implementation:

  1. Pattern matching (e.g., by a regex engine).
  2. Searching a file using a pattern matcher.
  3. Printing results.

Ultimately, both (1) and (3) are defined by de-coupled interfaces, of
which there may be multiple implementations. Namely, (1) is satisfied by
the `Matcher` trait in the `grep-matcher` crate and (3) is satisfied by
the `Sink` trait in the `grep-searcher` crate. The searcher (2) ties everything
together and finds results using a matcher and reports those results
using a `Sink` implementation.

Closes #162
2018-08-20 07:10:19 -04:00
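As a minimal sketch of how these three phases compose (assuming the
`grep-regex` and `grep-searcher` 0.1 APIs roughly as released, and not part of
this changeset itself):

```rust
// Minimal sketch: phase (1) is a Matcher, phase (2) is a Searcher,
// and phase (3) is a Sink that receives each matching line.
use grep_regex::RegexMatcher;
use grep_searcher::Searcher;
use grep_searcher::sinks::UTF8;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // (1) Pattern matching: a Matcher built from a regex.
    let matcher = RegexMatcher::new(r"fast")?;
    // (2) Searching: run the matcher over a slice of bytes.
    let haystack = &b"ripgrep is fast\nand it is correct\n"[..];
    Searcher::new().search_slice(
        &matcher,
        haystack,
        // (3) Reporting: the UTF8 sink adapter calls this closure with the
        // line number and contents of every matching line.
        UTF8(|line_number, line| {
            print!("{}:{}", line_number, line);
            Ok(true) // return false to stop the search early
        }),
    )?;
    Ok(())
}
```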
Sylvestre Ledru
0958837ee1 readme: ripgrep is available in Debian Buster
PR #1016
2018-08-17 06:35:43 -04:00
Andrew Gallant
94be3bd4bb grep: remove senseless test
It was pulling in a sizable data file and doesn't appear to be testing
anything meaningful that isn't covered by a variety of other tests.
2018-08-15 19:52:50 -04:00
woky
deb1de6e1e ignore/types: add *.sbt to scala type
sbt is currently the most used Scala build tool. It uses
*.sbt files, which are basically Scala.

PR #1010
2018-08-14 06:29:27 -07:00
Vanessa McHale
6afdf15d85 ignore/types: add Idris, Dhall and ATS
And also improve Haskell detection.

PR #1007
2018-08-07 13:10:19 -04:00
Jonatan Hamberg
6cda7b24e9 readme: update debian link to 0.9.0
PR #1006
2018-08-07 07:50:08 -04:00
llogiq
ad9befbc1d deps: update bytecount to 0.3.2
PR #1003
2018-08-06 06:44:16 -04:00
66 changed files with 5607 additions and 2848 deletions


@@ -17,6 +17,8 @@ addons:
# Needed for testing decompression search.
- xz-utils
- liblz4-tool
# For building MUSL static builds on Linux.
- musl-tools
matrix:
fast_finish: true
include:
@@ -99,7 +101,6 @@ branches:
only:
# Pushes and PR to the master branch
- master
- ag/libripgrep
# Ruby regex to match tags. Required, or travis won't trigger deploys when
# a new tag is pushed.
- /^\d+\.\d+\.\d+.*$/


@@ -1,3 +1,68 @@
0.10.0 (TBD)
============
This is a new minor version release of ripgrep that contains some major new
features, a huge number of bug fixes, and is the first release based on
libripgrep. The entirety of ripgrep's core search and printing code has been
rewritten and generalized so that anyone can make use of it.
Major new features include PCRE2 support, multi-line search and a JSON output
format.

**BREAKING CHANGES**:
* The match semantics of `-w/--word-regexp` have changed slightly. They used
to be `\b(?:<your pattern>)\b`, but now it's
`(?:^|\W)(?:<your pattern>)(?:$|\W)`.
See [#389](https://github.com/BurntSushi/ripgrep/issues/389) for more
details.

Feature enhancements:
* [FEATURE #162](https://github.com/BurntSushi/ripgrep/issues/162):
libripgrep is now a thing, composed of the following crates:
`grep`, `grep-matcher`, `grep-pcre2`, `grep-printer`, `grep-regex` and
`grep-searcher`.
* [FEATURE #176](https://github.com/BurntSushi/ripgrep/issues/176):
Add `-U/--multiline` flag that permits matching over multiple lines.
* [FEATURE #188](https://github.com/BurntSushi/ripgrep/issues/188):
Add `-P/--pcre2` flag that gives support for look-around and backreferences.
* [FEATURE #244](https://github.com/BurntSushi/ripgrep/issues/244):
Add `--json` flag that prints results in a JSON Lines format.
* [FEATURE #416](https://github.com/BurntSushi/ripgrep/issues/416):
Add `--crlf` flag to permit `$` to work with carriage returns on Windows.
* [FEATURE #917](https://github.com/BurntSushi/ripgrep/issues/917):
The `--trim` flag strips prefix whitespace from all lines printed.
* [FEATURE #993](https://github.com/BurntSushi/ripgrep/issues/993):
Add `--null-data` flag, which makes ripgrep use NUL as a line terminator.
* [FEATURE #997](https://github.com/BurntSushi/ripgrep/issues/997):
The `--passthru` flag now works with the `--replace` flag.

Bug fixes:
* [BUG #2](https://github.com/BurntSushi/ripgrep/issues/2):
Searching with non-zero context can now use memory maps if appropriate.
* [BUG #200](https://github.com/BurntSushi/ripgrep/issues/200):
ripgrep will now stop correctly when its output pipe is closed.
* [BUG #389](https://github.com/BurntSushi/ripgrep/issues/389):
The `-w/--word-regexp` flag now works more intuitively.
* [BUG #643](https://github.com/BurntSushi/ripgrep/issues/643):
Detection of readable stdin has improved on Windows.
* [BUG #441](https://github.com/BurntSushi/ripgrep/issues/441),
[BUG #690](https://github.com/BurntSushi/ripgrep/issues/690),
[BUG #980](https://github.com/BurntSushi/ripgrep/issues/980):
Matching empty lines now works correctly in several corner cases.
* [BUG #764](https://github.com/BurntSushi/ripgrep/issues/764):
Color escape sequences now coalesce, which reduces output size.
* [BUG #922](https://github.com/BurntSushi/ripgrep/issues/922):
ripgrep is now more robust with respect to memory maps failing.
* [BUG #937](https://github.com/BurntSushi/ripgrep/issues/937):
Color escape sequences are no longer emitted for empty matches.
* [BUG #940](https://github.com/BurntSushi/ripgrep/issues/940):
Context from the `--passthru` flag should not impact process exit status.
* [BUG #1013](https://github.com/BurntSushi/ripgrep/issues/1013):
Add compile time and runtime CPU features to `--version` output.

0.9.0 (2018-08-03)
==================
This is a new minor version release of ripgrep that contains some minor new

Cargo.lock (generated)

@@ -19,7 +19,7 @@ name = "atty"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -29,7 +29,7 @@ name = "base64"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
"safemem 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -40,7 +40,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bytecount"
version = "0.3.1"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -48,12 +48,17 @@ dependencies = [
[[package]]
name = "byteorder"
version = "1.2.3"
version = "1.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cc"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cfg-if"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -74,26 +79,21 @@ name = "crossbeam"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "dtoa"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "encoding_rs"
version = "0.8.4"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "encoding_rs_io"
version = "0.1.1"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs 0.8.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -127,7 +127,7 @@ dependencies = [
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)",
"glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -136,57 +136,68 @@ dependencies = [
name = "grep"
version = "0.2.0"
dependencies = [
"grep-matcher 0.0.1",
"grep-printer 0.0.1",
"grep-regex 0.0.1",
"grep-searcher 0.0.1",
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.1.0",
"grep-pcre2 0.1.0",
"grep-printer 0.1.0",
"grep-regex 0.1.0",
"grep-searcher 0.1.0",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"walkdir 2.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep-matcher"
version = "0.0.1"
version = "0.1.0"
dependencies = [
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep-pcre2"
version = "0.1.0"
dependencies = [
"grep-matcher 0.1.0",
"pcre2 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep-printer"
version = "0.0.1"
version = "0.1.0"
dependencies = [
"base64 0.9.2 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.0.1",
"grep-regex 0.0.1",
"grep-searcher 0.0.1",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.70 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.70 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.24 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.1.0",
"grep-regex 0.1.0",
"grep-searcher 0.1.0",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep-regex"
version = "0.0.1"
version = "0.1.0"
dependencies = [
"grep-matcher 0.0.1",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.1.0",
"log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "grep-searcher"
version = "0.0.1"
version = "0.1.0"
dependencies = [
"bytecount 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs_io 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.0.1",
"grep-regex 0.0.1",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"bytecount 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs 0.8.6 (registry+https://github.com/rust-lang/crates.io-index)",
"encoding_rs_io 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"grep-matcher 0.1.0",
"grep-regex 0.1.0",
"log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"memmap 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -198,14 +209,14 @@ version = "0.4.3"
dependencies = [
"crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.4.1",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"walkdir 2.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"walkdir 2.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -216,20 +227,23 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.0.2"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"version_check 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "libc"
version = "0.2.42"
version = "0.2.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.3"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -237,7 +251,7 @@ name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -245,7 +259,7 @@ name = "memmap"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -254,12 +268,39 @@ name = "num_cpus"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "pcre2"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"pcre2-sys 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.13 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "pcre2-sys"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.13 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "pkg-config"
version = "0.3.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "proc-macro2"
version = "0.4.9"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -267,19 +308,19 @@ dependencies = [
[[package]]
name = "quote"
version = "0.6.3"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.9 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand"
version = "0.4.2"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -304,7 +345,7 @@ dependencies = [
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -333,15 +374,23 @@ dependencies = [
"globset 0.4.1",
"grep 0.2.0",
"ignore 0.4.3",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ryu"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "safemem"
version = "0.2.0"
@@ -357,27 +406,27 @@ dependencies = [
[[package]]
name = "serde"
version = "1.0.70"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde_derive"
version = "1.0.70"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.9 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_json"
version = "1.0.24"
version = "1.0.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"dtoa 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.70 (registry+https://github.com/rust-lang/crates.io-index)",
"ryu 0.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -392,11 +441,11 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "syn"
version = "0.14.4"
version = "0.14.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.9 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -405,7 +454,7 @@ name = "tempdir"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"remove_dir_all 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -422,7 +471,7 @@ name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -437,11 +486,10 @@ dependencies = [
[[package]]
name = "thread_local"
version = "0.3.5"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -459,27 +507,19 @@ name = "unicode-xid"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unreachable"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "utf8-ranges"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "void"
version = "1.0.2"
name = "version_check"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "walkdir"
version = "2.1.4"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -519,53 +559,56 @@ dependencies = [
"checksum atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "9a7d5b8723950951411ee34d271d99dddcc2035a16ab25310ea2c8cfd4369652"
"checksum base64 0.9.2 (registry+https://github.com/rust-lang/crates.io-index)" = "85415d2594767338a74a30c1d370b2f3262ec1b4ed2d7bba5b3faf4de40467d9"
"checksum bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d0c54bb8f454c567f21197eefcdbf5679d0bd99f2ddbe52e84c77061952e6789"
"checksum bytecount 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "882585cd7ec84e902472df34a5e01891202db3bf62614e1f0afe459c1afcf744"
"checksum byteorder 1.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "74c0b906e9446b0a2e4f760cdb3fa4b2c48cdc6db8766a845c54b6ff063fd2e9"
"checksum cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "efe5c877e17a9c717a0bf3613b2709f723202c4e4675cc8f12926ded29bcb17e"
"checksum bytecount 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "f861d9ce359f56dbcb6e0c2a1cb84e52ad732cadb57b806adeb3c7668caccbd8"
"checksum byteorder 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)" = "8389c509ec62b9fe8eca58c502a0acaf017737355615243496cde4994f8fa4f9"
"checksum cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)" = "2119ea4867bd2b8ed3aecab467709720b2d55b1bcfe09f772fd68066eaf15275"
"checksum cfg-if 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "0c4e7bb64a8ebb0d856483e1e682ea3422f883c5f5615a90d51a2c82fe87fdd3"
"checksum clap 2.32.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b957d88f4b6a63b9d70d5f454ac8011819c6efa7727858f458ab71c756ce2d3e"
"checksum crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "24ce9782d4d5c53674646a6a4c1863a21a8fc0cb649b3c94dfc16e45071dea19"
"checksum dtoa 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "6d301140eb411af13d3115f9a562c85cc6b541ade9dfa314132244aaee7489dd"
"checksum encoding_rs 0.8.4 (registry+https://github.com/rust-lang/crates.io-index)" = "88a1b66a0d28af4b03a8c8278c6dcb90e6e600d89c14500a9e7a02e64b9ee3ac"
"checksum encoding_rs_io 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ad0ffe753ba194ef1bc070e8d61edaadb1536c05e364fc9178ca6cbde10922c4"
"checksum encoding_rs 0.8.6 (registry+https://github.com/rust-lang/crates.io-index)" = "2a91912d6f37c6a8fef8a2316a862542d036f13c923ad518b5aca7bcaac7544c"
"checksum encoding_rs_io 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "f222ff554d6e172f3569a2d7d0fd8061d54215984ef67b24ce031c1fcbf2c9b3"
"checksum fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)" = "2fad85553e09a6f881f739c29f0b00b0f01357c743266d478b68951ce23285f3"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "8be18de09a56b60ed0edf84bc9df007e30040691af7acd1c41874faac5895bfb"
"checksum itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5adb58558dcd1d786b5f0bd15f3226ee23486e24b7b58304b60f64dc68e62606"
"checksum lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "fb497c35d362b6a331cfd94956a07fc2c78a4604cdbee844a81170386b996dd3"
"checksum libc 0.2.42 (registry+https://github.com/rust-lang/crates.io-index)" = "b685088df2b950fccadf07a7187c8ef846a959c142338a48f9dc0b94517eb5f1"
"checksum log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "61bd98ae7f7b754bc53dca7d44b604f733c6bba044ea6f41bc8d89272d8161d2"
"checksum lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ca488b89a5657b0a2ecd45b95609b3e848cf1755da332a0da46e2b2b1cb371a7"
"checksum libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)" = "76e3a3ef172f1a0b9a9ff0dd1491ae5e6c948b94479a3021819ba7d860c8645d"
"checksum log 0.4.4 (registry+https://github.com/rust-lang/crates.io-index)" = "cba860f648db8e6f269df990180c2217f333472b4a6e901e97446858487971e2"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum memmap 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)" = "e2ffa2c986de11a9df78620c01eeaaf27d94d3ff02bf81bfcca953102dd0c6ff"
"checksum num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c51a3322e4bca9d212ad9a158a02abc6934d005490c054a2778df73a70aa0a30"
"checksum proc-macro2 0.4.9 (registry+https://github.com/rust-lang/crates.io-index)" = "cccdc7557a98fe98453030f077df7f3a042052fae465bb61d2c2c41435cfd9b6"
"checksum quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)" = "e44651a0dc4cdd99f71c83b561e221f714912d11af1a4dff0631f923d53af035"
"checksum rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "eba5f8cb59cc50ed56be8880a5c7b496bfd9bd26394e176bc67884094145c2c5"
"checksum pcre2 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "0c16ec0e30c17f938a2da8ff970ad9a4100166d0538898dcc035b55c393cab54"
"checksum pcre2-sys 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "a9027f9474e4e13d3b965538aafcaebe48c803488ad76b3c97ef061a8324695f"
"checksum pkg-config 0.3.13 (registry+https://github.com/rust-lang/crates.io-index)" = "104630aa1c83213cbc76db0703630fcb0421dac3585063be4ce9a8a2feeaa745"
"checksum proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)" = "ee5697238f0d893c7f0ecc59c0999f18d2af85e424de441178bcacc9f9e6cf67"
"checksum quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)" = "ed7d650913520df631972f21e104a4fa2f9c82a14afc65d17b388a2e29731e7c"
"checksum rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "8356f47b32624fef5b3301c1be97e5944ecdd595409cc5da11d05f211db6cfbd"
"checksum redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "c214e91d3ecf43e9a4e41e578973adeb14b474f2bee858742d127af75a0112b1"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5bbbea44c5490a1e84357ff28b7d518b4619a159fed5d25f6c1de2d19cc42814"
"checksum regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)" = "747ba3b235651f6e2f67dfa8bcdcd073ddb7c243cb21c442fc12395dfcac212d"
"checksum remove_dir_all 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "3488ba1b9a2084d38645c4c08276a1752dcbf2c7130d74f1569681ad5d2799c5"
"checksum ryu 0.2.4 (registry+https://github.com/rust-lang/crates.io-index)" = "fd0568787116e13c652377b6846f5931454a363a8fdf8ae50463ee40935b278b"
"checksum safemem 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e27a8b19b835f7aea908818e871f5cc3a5a186550c30773be987e155e8163d8f"
"checksum same-file 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "cfb6eded0b06a0b512c8ddbcf04089138c9b4362c2f696f3c3d76039d68f3637"
"checksum serde 1.0.70 (registry+https://github.com/rust-lang/crates.io-index)" = "0c3adf19c07af6d186d91dae8927b83b0553d07ca56cbf7f2f32560455c91920"
"checksum serde_derive 1.0.70 (registry+https://github.com/rust-lang/crates.io-index)" = "3525a779832b08693031b8ecfb0de81cd71cfd3812088fafe9a7496789572124"
"checksum serde_json 1.0.24 (registry+https://github.com/rust-lang/crates.io-index)" = "c3c6908c7b925cd6c590358a4034de93dbddb20c45e1d021931459fd419bf0e2"
"checksum serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "6dfad05c8854584e5f72fb859385ecdfa03af69c3fd0572f0da2d4c95f060bdb"
"checksum serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "b719c6d5e9f73fbc37892246d5852333f040caa617b8873c6aced84bcb28e7bb"
"checksum serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)" = "44dd2cfde475037451fa99b7e5df77aa3cfd1536575fa8e7a538ab36dcde49ae"
"checksum simd 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "ed3686dd9418ebcc3a26a0c0ae56deab0681e53fe899af91f5bbcee667ebffb1"
"checksum strsim 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "bb4f380125926a99e52bc279241539c018323fab05ad6368b56f93d9369ff550"
"checksum syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)" = "2beff8ebc3658f07512a413866875adddd20f4fd47b2a4e6c9da65cd281baaea"
"checksum syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)" = "b7bfcbb0c068d0f642a0ffbd5c604965a360a61f99e8add013cef23a838614f3"
"checksum tempdir 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)" = "15f2b5fb00ccdf689e0149d1b1b3c03fead81c2b37735d812fa8bddbbf41b6d8"
"checksum termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "722426c4a0539da2c4ffd9b419d90ad540b4cff4a053be9069c908d4d07e2836"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum textwrap 0.10.0 (registry+https://github.com/rust-lang/crates.io-index)" = "307686869c93e71f94da64286f9a9524c0f308a9e1c87a583de8e9c9039ad3f6"
"checksum thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "279ef31c19ededf577bfd12dfae728040a21f635b06a24cd670ff510edd38963"
"checksum thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c6b53e329000edc2b34dbe8545fd20e55a333362d0a321909685a19bd28c3f1b"
"checksum ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fd2be2d6639d0f8fe6cdda291ad456e23629558d466e2789d2c3e9892bda285d"
"checksum unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "882386231c45df4700b275c7ff55b6f3698780a650026380e72dabe76fa46526"
"checksum unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "fc72304796d0818e357ead4e000d19c9c174ab23dc11093ac919054d20a6a7fc"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum walkdir 2.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "63636bd0eb3d00ccb8b9036381b526efac53caf112b7783b730ab3f8e44da369"
"checksum version_check 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "7716c242968ee87e5542f8021178248f267f295a5c4803beae8b8b7fd9bc6051"
"checksum walkdir 2.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f1b768ba943161a9226ccd59b26bcd901e5d60e6061f4fcad3034784e0c7372b"
"checksum winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "773ef9dcc5f24b7d850d0ff101e542ff24c3b090a9768e03ff889fdef41f00fd"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"


@@ -33,8 +33,14 @@ path = "tests/tests.rs"
[workspace]
members = [
"grep", "globset", "ignore",
"grep-matcher", "grep-printer", "grep-regex", "grep-searcher",
"globset",
"grep",
"grep-matcher",
"grep-pcre2",
"grep-printer",
"grep-regex",
"grep-searcher",
"ignore",
]
[dependencies]
@@ -47,6 +53,7 @@ log = "0.4"
num_cpus = "1"
regex = "1"
same-file = "1"
serde_json = "1"
termcolor = "1"
[dependencies.clap]
@@ -56,7 +63,7 @@ features = ["suggestions", "color"]
[target.'cfg(windows)'.dependencies.winapi]
version = "0.3"
features = ["std", "winnt"]
features = ["std", "fileapi", "winnt"]
[build-dependencies]
lazy_static = "1"
@@ -66,9 +73,14 @@ version = "2.29.4"
default-features = false
features = ["suggestions", "color"]
[dev-dependencies]
serde = "1"
serde_derive = "1"
[features]
avx-accel = ["grep/avx-accel"]
simd-accel = ["grep/simd-accel"]
pcre2 = ["grep/pcre2"]
[profile.release]
debug = true
debug = 1

FAQ.md

@@ -16,6 +16,7 @@
* [How do I get around the regex size limit?](#size-limit)
* [How do I make the `-f/--file` flag faster?](#dfa-size)
* [How do I make the output look like The Silver Searcher's output?](#silver-searcher-output)
* [Why does ripgrep get slower when I enable PCRE2 regexes?](#pcre2-slow)
* [When I run `rg`, why does it execute some other command?](#rg-other-cmd)
* [How do I create an alias for ripgrep on Windows?](#rg-alias-windows)
* [How do I create a PowerShell profile?](#powershell-profile)
@@ -157,13 +158,37 @@ tool. With that said,
How do I use lookaround and/or backreferences?
</h3>
This isn't currently possible. ripgrep uses finite automata to implement
regular expression search, and in turn, guarantees linear time searching on all
inputs. It is difficult to efficiently support lookaround and backreferences in
finite automata engines, so ripgrep does not provide these features.
ripgrep's default regex engine does not support lookaround or backreferences.
This is primarily because the default regex engine is implemented using finite
state machines in order to guarantee a linear worst case time complexity on all
inputs. Backreferences are not possible to implement in this paradigm, and
lookaround appears difficult to do efficiently.
If a production quality regular expression engine with these features is ever
written in Rust, then it is possible ripgrep will provide it as an opt-in
However, ripgrep optionally supports using PCRE2 as the regex engine instead of
the default one based on finite state machines. You can enable PCRE2 with the
`-P/--pcre2` flag. For example, in the root of the ripgrep repo, you can easily
find all palindromes:
```
$ rg -P '(\w{10})\1'
tests/misc.rs
483: cmd.arg("--max-filesize").arg("44444444444444444444");
globset/src/glob.rs
1206: matches!(match7, "a*a*a*a*a*a*a*a*a", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
```
If your version of ripgrep doesn't support PCRE2, then you'll get an error
message when you try to use the `-P/--pcre2` flag:
```
$ rg -P '(\w{10})\1'
PCRE2 is not available in this build of ripgrep
```
Most of the releases distributed by the ripgrep project here on GitHub will
come bundled with PCRE2 enabled. If you installed ripgrep through a different
means (like your system's package manager), then please reach out to the
maintainer of that package to see whether it's possible to enable the PCRE2
feature.
@@ -368,6 +393,297 @@ $ RIPGREP_CONFIG_PATH=$HOME/.config/ripgrep/rc rg foo
```
<h3 name="pcre2-slow">
Why does ripgrep get slower when I enable PCRE2 regexes?
</h3>
When you use the `--pcre2` (`-P` for short) flag, ripgrep will use the PCRE2
regex engine instead of the default. Both regex engines are quite fast,
but PCRE2 provides a number of additional features such as look-around and
backreferences that many enjoy using. This is largely because PCRE2 uses
a backtracking implementation, whereas the default regex engine uses a finite
automaton based implementation. The former provides the ability to add lots of
bells and whistles over the latter, but the latter executes with worst case
linear time complexity.
With that out of the way, if you've used `-P` with ripgrep, you may have
noticed that it can be slower. The reasons for why this is are quite complex,
and they are complex because the optimizations that ripgrep uses to implement
fast search are complex.
The task ripgrep has before it is somewhat simple; all it needs to do is search
a file for occurrences of some pattern and then print the lines containing
those occurrences. The problem lies in what is considered a valid match and how
exactly we read the bytes from a file.
In terms of what is considered a valid match, remember that ripgrep will only
report matches spanning a single line by default. The problem here is that
some patterns can match across multiple lines, and ripgrep needs to prevent
that from happening. For example, `foo\sbar` will match `foo\nbar`. The most
obvious way to achieve this is to read the data from a file, and then apply
the pattern search to that data for each line. The problem with this approach
is that it can be quite slow; it would be much faster to let the pattern
search across as much data as possible. It's faster because it gets rid of the
overhead of finding the boundaries of every line, and also because it gets rid
of the overhead of starting and stopping the pattern search for every single
line. (This is operating under the general assumption that matching lines are
much rarer than non-matching lines.)
It turns out that we can use the faster approach by applying a very simple
restriction to the pattern: *statically prevent* the pattern from matching
through a `\n` character. Namely, when given a pattern like `foo\sbar`,
ripgrep will remove `\n` from the `\s` character class automatically. In some
cases, a simple removal is not so easy. For example, ripgrep will return an
error when your pattern includes a `\n` literal:
```
$ rg '\n'
the literal '"\n"' is not allowed in a regex
```
So what does this have to do with PCRE2? Well, ripgrep's default regex engine
exposes APIs for doing syntactic analysis on the pattern in a way that makes
it quite easy to strip `\n` from the pattern (or otherwise detect it and report
an error if stripping isn't possible). PCRE2 seemingly does not provide a
similar API, so ripgrep does not do any stripping when PCRE2 is enabled. This
forces ripgrep to use the "slow" search strategy of searching each line
individually.
OK, so if enabling PCRE2 slows down the default method of searching because it
forces matches to be limited to a single line, then why is PCRE2 also sometimes
slower when performing multiline searches? Well, that's because there are
*multiple* reasons why using PCRE2 in ripgrep can be slower than the default
regex engine. This time, blame PCRE2's Unicode support, which ripgrep enables
by default. In particular, PCRE2 cannot simultaneously enable Unicode support
and search arbitrary data. That is, when PCRE2's Unicode support is enabled,
the data **must** be valid UTF-8 (to do otherwise is to invoke undefined
behavior). This is in contrast to ripgrep's default regex engine, which can
enable Unicode support and still search arbitrary data. ripgrep's default
regex engine simply won't match invalid UTF-8 for a pattern that can otherwise
only match valid UTF-8. Why doesn't PCRE2 do the same? This author isn't
familiar with its internals, so we can't comment on it here.
The bottom line here is that we can't enable PCRE2's Unicode support without
simultaneously incurring a performance penalty for ensuring that we are
searching valid UTF-8. In particular, ripgrep will transcode the contents
of each file to UTF-8 while replacing invalid UTF-8 data with the Unicode
replacement codepoint. ripgrep then disables PCRE2's own internal UTF-8
checking, since we've guaranteed the data we hand it will be valid UTF-8. The
reason why ripgrep takes this approach is because if we do hand PCRE2 invalid
UTF-8, then it will report a match error if it comes across an invalid UTF-8
sequence. This is not good news for ripgrep, since it will stop it from
searching the rest of the file, and will also print potentially undesirable
error messages to users.
All right, the above is a lot of information to swallow if you aren't already
familiar with ripgrep internals. Let's make this concrete with some examples.
First, let's get some data big enough to magnify the performance differences:
```
$ curl -O 'https://burntsushi.net/stuff/subtitles2016-sample.gz'
$ gzip -d subtitles2016-sample.gz
$ md5sum subtitles2016-sample
e3cb796a20bbc602fbfd6bb43bda45f5 subtitles2016-sample
```
To search this data, we will use the pattern `^\w{42}$`, which contains exactly
one hit in the file and has no literals. Having no literals is important,
because it ensures that the regex engine won't use literal optimizations to
speed up the search. In other words, it lets us reason coherently about the
actual task that the regex engine is performing.
Let's now walk through a few examples in light of the information above. First,
let's consider the default search using ripgrep's default regex engine and
then the same search with PCRE2:
```
$ time rg '^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.783s
user 0m1.731s
sys 0m0.051s
$ time rg -P '^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m2.458s
user 0m2.419s
sys 0m0.038s
```
In this particular example, both pattern searches are using a Unicode aware
`\w` character class and both are counting lines in order to report line
numbers. The key difference here is that the first search will not search
line by line, but the second one will. We can observe which strategy ripgrep
uses by passing the `--trace` flag:
```
$ rg '^\w{42}$' subtitles2016-sample --trace
[... snip ...]
TRACE|grep_searcher::searcher|grep-searcher/src/searcher/mod.rs:622: Some("subtitles2016-sample"): searching via memory map
TRACE|grep_searcher::searcher|grep-searcher/src/searcher/mod.rs:712: slice reader: searching via slice-by-line strategy
TRACE|grep_searcher::searcher::core|grep-searcher/src/searcher/core.rs:61: searcher core: will use fast line searcher
[... snip ...]
$ rg -P '^\w{42}$' subtitles2016-sample --trace
[... snip ...]
TRACE|grep_searcher::searcher|grep-searcher/src/searcher/mod.rs:622: Some("subtitles2016-sample"): searching via memory map
TRACE|grep_searcher::searcher|grep-searcher/src/searcher/mod.rs:705: slice reader: needs transcoding, using generic reader
TRACE|grep_searcher::searcher|grep-searcher/src/searcher/mod.rs:685: generic reader: searching via roll buffer strategy
TRACE|grep_searcher::searcher::core|grep-searcher/src/searcher/core.rs:63: searcher core: will use slow line searcher
[... snip ...]
```
The first says it is using the "fast line searcher" whereas the second says
it is using the "slow line searcher." The latter also shows that we are
decoding the contents of the file, which also impacts performance.
Interestingly, in this case, the pattern does not match a `\n` and the file
we're searching is valid UTF-8, so neither the slow line-by-line search
strategy nor the decoding are necessary. We could fix the former issue with
better PCRE2 introspection APIs. We can actually fix the latter issue with
ripgrep's `--no-encoding` flag, which prevents the automatic UTF-8 decoding,
but will enable PCRE2's own UTF-8 validity checking. Unfortunately, it's slower
in my build of ripgrep:
```
$ time rg -P '^\w{42}$' subtitles2016-sample --no-encoding
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m3.074s
user 0m3.021s
sys 0m0.051s
```
(Tip: use the `--trace` flag to verify that no decoding in ripgrep is
happening.)
A possible reason why PCRE2's UTF-8 checking is slower is that it might
not be as fast as the highly optimized UTF-8 checking routines found in the
[`encoding_rs`](https://github.com/hsivonen/encoding_rs) library, which is what
ripgrep uses for UTF-8 decoding. Moreover, my build of ripgrep enables
`encoding_rs`'s SIMD optimizations, which may be in play here.
Also, note that using the `--no-encoding` flag can cause PCRE2 to report
invalid UTF-8 errors, which causes ripgrep to stop searching the file:
```
$ cat invalid-utf8
foobar
$ xxd invalid-utf8
00000000: 666f 6fff 6261 720a foo.bar.
$ rg foo invalid-utf8
1:foobar
$ rg -P foo invalid-utf8
1:foo�bar
$ rg -P foo invalid-utf8 --no-encoding
invalid-utf8: PCRE2: error matching: UTF-8 error: illegal byte (0xfe or 0xff)
```
All right, so at this point, you might think that we could remove the penalty
for line-by-line searching by enabling multiline search. After all, our
particular pattern can't match across multiple lines anyway, so we'll still get
the results we want. Let's try it:
```
$ time rg -U '^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.803s
user 0m1.748s
sys 0m0.054s
$ time rg -P -U '^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m2.962s
user 0m2.246s
sys 0m0.713s
```
Search times remain the same with the default regex engine, but the PCRE2
search gets _slower_. What happened? The secrets can be revealed with the
`--trace` flag once again. In the former case, ripgrep actually detects that
the pattern can't match across multiple lines, and so will fall back to the
"fast line search" strategy as with our search without `-U`.
However, for PCRE2, things are much worse. Namely, since Unicode mode is still
enabled, ripgrep is still going to decode UTF-8 to ensure that it hands only
valid UTF-8 to PCRE2. Unfortunately, one key downside of multiline search is
that ripgrep cannot do it incrementally. Since matches can be arbitrarily long,
ripgrep actually needs the entire file in memory at once. Normally, we can use
a memory map for this, but because we need to UTF-8 decode the file before
searching it, ripgrep winds up reading the entire contents of the file on to
the heap before executing a search. Owch.
OK, so Unicode is killing us here. The file we're searching is _mostly_ ASCII,
so maybe we're OK with missing some data. (Try `rg '[\w--\p{ascii}]'` to see
non-ASCII word characters that an ASCII-only `\w` character class would miss.)
We can disable Unicode in both searches, but this is done differently depending
on the regex engine we use:
```
$ time rg '(?-u)^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.714s
user 0m1.669s
sys 0m0.044s
$ time rg -P '^\w{42}$' subtitles2016-sample --no-pcre2-unicode
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.997s
user 0m1.958s
sys 0m0.037s
```
For the most part, ripgrep's default regex engine performs about the same.
PCRE2 does improve a little bit, and is now almost as fast as the default
regex engine. If you look at the output of `--trace`, you'll see that ripgrep
will no longer perform UTF-8 decoding, but it does still use the slow
line-by-line searcher.
At this point, we can combine all of our insights above: let's try to get off
of the slow line-by-line searcher by enabling multiline mode, and let's stop
UTF-8 decoding by disabling Unicode support:
```
$ time rg -U '(?-u)^\w{42}$' subtitles2016-sample
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.714s
user 0m1.655s
sys 0m0.058s
$ time rg -P -U '^\w{42}$' subtitles2016-sample --no-pcre2-unicode
21225780:EverymajordevelopmentinthehistoryofAmerica
real 0m1.121s
user 0m1.071s
sys 0m0.048s
```
Ah, there's PCRE2's JIT shining! ripgrep's default regex engine once again
remains about the same, but PCRE2 no longer needs to search line-by-line and it
no longer needs to do any kind of UTF-8 checks. This allows the file to get
memory mapped and passed right through PCRE2's JIT at impressive speeds. (As
a brief and interesting historical note, the configuration of "memory map +
multiline + no-Unicode" is exactly the configuration used by The Silver
Searcher. This analysis perhaps sheds some reasoning as to why it converged on
that specific setting!)
In summary, if you want PCRE2 to go as fast as possible and you don't care
about Unicode and you don't care about matches possibly spanning across
multiple lines, then enable multiline mode with `-U` and disable PCRE2's
Unicode support with the `--no-pcre2-unicode` flag.
<h3 name="rg-other-cmd">
When I run <code>rg</code>, why does it execute some other command?
</h3>

README.md

@@ -7,7 +7,7 @@ available for [every release](https://github.com/BurntSushi/ripgrep/releases).
ripgrep is similar to other popular search tools like The Silver Searcher,
ack and grep.
[![Linux build status](https://travis-ci.org/BurntSushi/ripgrep.svg?branch=master)](https://travis-ci.org/BurntSushi/ripgrep)
[![Linux build status](https://travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![Crates.io](https://img.shields.io/crates/v/ripgrep.svg)](https://crates.io/crates/ripgrep)
@@ -85,14 +85,16 @@ increases the times to `2.640s` for ripgrep and `10.277s` for GNU grep.
### Why should I use ripgrep?
* It can replace many use cases served by both The Silver Searcher and GNU grep
because it is generally faster than both. (See [the FAQ](FAQ.md#posix4ever)
for more details on whether ripgrep can truly replace grep.)
* Like The Silver Searcher, ripgrep defaults to recursive directory search
and won't search files ignored by your `.gitignore` files. It also ignores
hidden and binary files by default. ripgrep also implements full support
for `.gitignore`, whereas there are many bugs related to that functionality
in The Silver Searcher.
* It can replace many use cases served by other search tools
because it contains most of their features and is generally faster. (See
[the FAQ](FAQ.md#posix4ever) for more details on whether ripgrep can truly
replace grep.)
* Like other tools specialized to code search, ripgrep defaults to recursive
directory search and won't search files ignored by your `.gitignore` files.
It also ignores hidden and binary files by default. ripgrep also implements
full support for `.gitignore`, whereas there are many bugs related to that
functionality in other code search tools claiming to provide the same
functionality.
* ripgrep can search specific types of files. For example, `rg -tpy foo`
limits your search to Python files and `rg -Tjs foo` excludes Javascript
files from your search. ripgrep can be taught about new file types with
@@ -117,22 +119,24 @@ bugs, and Unicode support.
### Why shouldn't I use ripgrep?
I'd like to try to convince you why you *shouldn't* use ripgrep. This should
give you a glimpse at some important downsides or missing features of
ripgrep.
Despite initially not wanting to add every feature under the sun to ripgrep,
over time, ripgrep has grown support for most features found in other file
searching tools. This includes searching for results spanning across multiple
lines, and opt-in support for PCRE2, which provides look-around and
backreference support.
* ripgrep uses a regex engine based on finite automata, so if you want fancy
regex features such as backreferences or lookaround, ripgrep won't provide
them to you. ripgrep does support lots of things though, including, but not
limited to: lazy quantification (e.g., `a+?`), repetitions (e.g., `a{2,5}`),
begin/end assertions (e.g., `^\w+$`), word boundaries (e.g., `\bfoo\b`), and
support for Unicode categories (e.g., `\p{Sc}` to match currency symbols or
`\p{Lu}` to match any uppercase letter). (Fancier regexes will never be
supported.)
* ripgrep doesn't have multiline search. (Will happen as an opt-in feature.)
At this point, the primary reasons not to use ripgrep probably consist of one
or more of the following:
In other words, if you like fancy regexes or multiline search, then ripgrep
may not quite meet your needs (yet).
* You need a portable and ubiquitous tool. While ripgrep works on Windows,
macOS and Linux, it is not ubiquitous and it does not conform to any
standard such as POSIX. The best tool for this job is good old grep.
* There still exists some other minor feature (or bug) found in another tool
that isn't in ripgrep.
* There is a performance edge case where ripgrep doesn't do well but another
tool does. (Please file a bug report!)
* ripgrep can't be installed on your machine or isn't available for your
platform. (Please file a bug report!)
### Is it really faster than everything else?
@@ -145,7 +149,8 @@ Summarizing, ripgrep is fast because:
* It is built on top of
[Rust's regex engine](https://github.com/rust-lang-nursery/regex).
Rust's regex engine uses finite automata, SIMD and aggressive literal
optimizations to make searching very fast.
optimizations to make searching very fast. (PCRE2 support can be opted into
with the `-P/--pcre2` flag.)
* Rust's regex library maintains performance with full Unicode support by
building UTF-8 decoding directly into its deterministic finite automaton
engine.
@@ -168,6 +173,11 @@ Andy Lester, author of [ack](https://beyondgrep.com/), has published an
excellent table comparing the features of ack, ag, git-grep, GNU grep and
ripgrep: https://beyondgrep.com/feature-comparison/
Note that ripgrep has grown a few significant new features recently that
are not yet present in Andy's table. This includes, but is not limited to,
configuration files, passthru, support for searching compressed files,
multiline search and opt-in fancy regex support via PCRE2.
### Installation
@@ -207,13 +217,15 @@ If you're a **MacPorts** user, then you can install ripgrep from the
$ sudo port install ripgrep
```
If you're a **Windows Chocolatey** user, then you can install ripgrep from the [official repo](https://chocolatey.org/packages/ripgrep):
If you're a **Windows Chocolatey** user, then you can install ripgrep from the
[official repo](https://chocolatey.org/packages/ripgrep):
```
$ choco install ripgrep
```
If you're a **Windows Scoop** user, then you can install ripgrep from the [official bucket](https://github.com/lukesampson/scoop/blob/master/bucket/ripgrep.json):
If you're a **Windows Scoop** user, then you can install ripgrep from the
[official bucket](https://github.com/lukesampson/scoop/blob/master/bucket/ripgrep.json):
```
$ scoop install ripgrep
@@ -225,32 +237,37 @@ If you're an **Arch Linux** user, then you can install ripgrep from the official
$ pacman -S ripgrep
```
If you're a **Gentoo** user, you can install ripgrep from the [official repo](https://packages.gentoo.org/packages/sys-apps/ripgrep):
If you're a **Gentoo** user, you can install ripgrep from the
[official repo](https://packages.gentoo.org/packages/sys-apps/ripgrep):
```
$ emerge sys-apps/ripgrep
```
If you're a **Fedora 27+** user, you can install ripgrep from official repositories.
If you're a **Fedora 27+** user, you can install ripgrep from official
repositories.
```
$ sudo dnf install ripgrep
```
If you're a **Fedora 24+** user, you can install ripgrep from [copr](https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/):
If you're a **Fedora 24+** user, you can install ripgrep from
[copr](https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/):
```
$ sudo dnf copr enable carlwgeorge/ripgrep
$ sudo dnf install ripgrep
```
If you're an **openSUSE Tumbleweed** user, you can install ripgrep from the [official repo](http://software.opensuse.org/package/ripgrep):
If you're an **openSUSE Tumbleweed** user, you can install ripgrep from the
[official repo](http://software.opensuse.org/package/ripgrep):
```
$ sudo zypper install ripgrep
```
If you're a **RHEL/CentOS 7** user, you can install ripgrep from [copr](https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/):
If you're a **RHEL/CentOS 7** user, you can install ripgrep from
[copr](https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/):
```
$ sudo yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo
@@ -271,8 +288,14 @@ then ripgrep can be installed using a binary `.deb` file provided in each
ripgrep is not in the official Debian or Ubuntu repositories.
```
$ curl -LO https://github.com/BurntSushi/ripgrep/releases/download/0.8.1/ripgrep_0.8.1_amd64.deb
$ sudo dpkg -i ripgrep_0.8.1_amd64.deb
$ curl -LO https://github.com/BurntSushi/ripgrep/releases/download/0.9.0/ripgrep_0.9.0_amd64.deb
$ sudo dpkg -i ripgrep_0.9.0_amd64.deb
```
If you run Debian Buster (currently Debian testing) or Debian sid, ripgrep is
[officially maintained by Debian](https://tracker.debian.org/pkg/rust-ripgrep).
```
$ sudo apt-get install ripgrep
```
(N.B. Various snaps for ripgrep on Ubuntu are also available, but none of them
@@ -280,25 +303,29 @@ seem to work right and generate a number of very strange bug reports that I
don't know how to fix and don't have the time to fix. Therefore, it is no
longer a recommended installation option.)
If you're a **FreeBSD** user, then you can install ripgrep from the [official ports](https://www.freshports.org/textproc/ripgrep/):
If you're a **FreeBSD** user, then you can install ripgrep from the
[official ports](https://www.freshports.org/textproc/ripgrep/):
```
# pkg install ripgrep
```
If you're an **OpenBSD** user, then you can install ripgrep from the [official ports](http://openports.se/textproc/ripgrep):
If you're an **OpenBSD** user, then you can install ripgrep from the
[official ports](http://openports.se/textproc/ripgrep):
```
$ doas pkg_add ripgrep
```
If you're a **NetBSD** user, then you can install ripgrep from [pkgsrc](http://pkgsrc.se/textproc/ripgrep):
If you're a **NetBSD** user, then you can install ripgrep from
[pkgsrc](http://pkgsrc.se/textproc/ripgrep):
```
# pkgin install ripgrep
```
If you're a **Rust programmer**, ripgrep can be installed with `cargo`.
* Note that the minimum supported version of Rust for ripgrep is **1.23.0**,
although ripgrep may work with older versions.
* Note that the binary may be bigger than expected because it contains debug
@@ -347,6 +374,35 @@ are not necessary to get SIMD optimizations for search; those are enabled
automatically. Hopefully, some day, the `simd-accel` and `avx-accel` features
will similarly become unnecessary.
Finally, optional PCRE2 support can be built with ripgrep by enabling the
`pcre2` feature:
```
$ cargo build --release --features 'pcre2'
```
(Tip: use `--features 'pcre2 simd-accel avx-accel'` to also include compile
time SIMD optimizations.)
Enabling the PCRE2 feature will attempt to automatically find and link with
your system's PCRE2 library via `pkg-config`. If one doesn't exist, then
ripgrep will build PCRE2 from source using your system's C compiler and then
statically link it into the final executable. Static linking can be forced even
when there is an available PCRE2 system library by either building ripgrep with
the MUSL target or by setting `PCRE2_SYS_STATIC=1`.
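For example, forcing a static PCRE2 build might look like this (an illustrative
invocation combining the environment variable above with the `pcre2` feature):

```
$ PCRE2_SYS_STATIC=1 cargo build --release --features 'pcre2'
```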
ripgrep can be built with the MUSL target on Linux by first installing the MUSL
library on your system (consult your friendly neighborhood package manager).
Then you just need to add MUSL support to your Rust toolchain and rebuild
ripgrep, which yields a fully static executable:
```
$ rustup target add x86_64-unknown-linux-musl
$ cargo build --release --target x86_64-unknown-linux-musl
```
Applying the `--features` flag from above works as expected.
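Concretely, that means a fully static build with PCRE2 enabled should look
something like this (illustrative):

```
$ cargo build --release --target x86_64-unknown-linux-musl --features 'pcre2'
```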
### Running tests


@@ -1,8 +1,6 @@
# Inspired from https://github.com/habitat-sh/habitat/blob/master/appveyor.yml
cache:
- c:\cargo\registry
- c:\cargo\git
- c:\projects\ripgrep\target
init:
- mkdir c:\cargo
@@ -19,14 +17,20 @@ environment:
PROJECT_NAME: ripgrep
RUST_BACKTRACE: full
matrix:
- TARGET: i686-pc-windows-gnu
CHANNEL: stable
- TARGET: i686-pc-windows-msvc
CHANNEL: stable
- TARGET: x86_64-pc-windows-gnu
CHANNEL: stable
BITS: 64
MSYS2: 1
- TARGET: x86_64-pc-windows-msvc
CHANNEL: stable
BITS: 64
- TARGET: i686-pc-windows-gnu
CHANNEL: stable
BITS: 32
MSYS2: 1
- TARGET: i686-pc-windows-msvc
CHANNEL: stable
BITS: 32
matrix:
fast_finish: true
@@ -35,8 +39,9 @@ matrix:
# (Based on from https://github.com/rust-lang/libc/blob/master/appveyor.yml)
install:
- curl -sSf -o rustup-init.exe https://win.rustup.rs/
- rustup-init.exe -y --default-host %TARGET% --no-modify-path
- if defined MSYS2_BITS set PATH=%PATH%;C:\msys64\mingw%MSYS2_BITS%\bin
- rustup-init.exe -y --default-host %TARGET%
- set PATH=%PATH%;C:\Users\appveyor\.cargo\bin
- if defined MSYS2 set PATH=C:\msys64\mingw%BITS%\bin;%PATH%
- rustc -V
- cargo -V
@@ -46,11 +51,11 @@ build: false
# Equivalent to Travis' `script` phase
# TODO modify this phase as you see fit
test_script:
- cargo test --verbose --all
- cargo test --verbose --all --features pcre2
before_deploy:
# Generate artifacts for release
- cargo build --release
- cargo build --release --features pcre2
- mkdir staging
- copy target\release\rg.exe staging
- ps: copy target\release\build\ripgrep-*\out\_rg.ps1 staging
@@ -78,7 +83,6 @@ branches:
only:
- /\d+\.\d+\.\d+/
- master
- ag/libripgrep
# - appveyor
# - /\d+\.\d+\.\d+/
# except:


@@ -4,6 +4,7 @@ extern crate clap;
extern crate lazy_static;
use std::env;
use std::ffi::OsString;
use std::fs::{self, File};
use std::io::{self, Read, Write};
use std::path::Path;
@@ -18,6 +19,22 @@ use app::{RGArg, RGArgKind};
mod app;
fn main() {
// If our version of Rust has runtime SIMD detection, then set a cfg so
// we know we can test for it. We use this when generating ripgrep's
// --version output.
let version = rustc_version();
let parsed = match Version::parse(&version) {
Ok(parsed) => parsed,
Err(err) => {
eprintln!("failed to parse `rustc --version`: {}", err);
return;
}
};
let minimum = Version { major: 1, minor: 27, patch: 0 };
if version.contains("nightly") || parsed >= minimum {
println!("cargo:rustc-cfg=ripgrep_runtime_cpu");
}
// OUT_DIR is set by Cargo and it's where any additional build artifacts
// are written.
let outdir = match env::var_os("OUT_DIR") {
@@ -182,3 +199,63 @@ fn formatted_doc_txt(arg: &RGArg) -> io::Result<String> {
fn ioerr(msg: String) -> io::Error {
io::Error::new(io::ErrorKind::Other, msg)
}
fn rustc_version() -> String {
let rustc = env::var_os("RUSTC").unwrap_or(OsString::from("rustc"));
let output = process::Command::new(&rustc)
.arg("--version")
.output()
.unwrap()
.stdout;
String::from_utf8(output).unwrap()
}
#[derive(Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord)]
struct Version {
major: u32,
minor: u32,
patch: u32,
}
impl Version {
fn parse(mut s: &str) -> Result<Version, String> {
if !s.starts_with("rustc ") {
return Err(format!("unrecognized version string: {}", s));
}
s = &s["rustc ".len()..];
let parts: Vec<&str> = s.split(".").collect();
if parts.len() < 3 {
return Err(format!("not enough version parts: {:?}", parts));
}
let mut num = String::new();
for c in parts[0].chars() {
if !c.is_digit(10) {
break;
}
num.push(c);
}
let major = num.parse::<u32>().map_err(|e| e.to_string())?;
num.clear();
for c in parts[1].chars() {
if !c.is_digit(10) {
break;
}
num.push(c);
}
let minor = num.parse::<u32>().map_err(|e| e.to_string())?;
num.clear();
for c in parts[2].chars() {
if !c.is_digit(10) {
break;
}
num.push(c);
}
let patch = num.parse::<u32>().map_err(|e| e.to_string())?;
Ok(Version { major, minor, patch })
}
}
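As a quick illustration of the parsing logic above: a typical `rustc --version`
string reduces to its numeric triple, with anything after the patch number
ignored. (The commit hash and date below are invented for the example; only the
`Version` type defined above is assumed.)

```rust
// Illustrative only: the commit hash and date are made up.
fn example() {
    let v = Version::parse("rustc 1.28.0 (abcdef123 2018-07-30)").unwrap();
    assert_eq!(v, Version { major: 1, minor: 28, patch: 0 });
}
```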


@@ -8,7 +8,11 @@ set -ex
# Generate artifacts for release
mk_artifacts() {
cargo build --target "$TARGET" --release
if is_arm; then
cargo build --target "$TARGET" --release
else
cargo build --target "$TARGET" --release --features 'pcre2'
fi
}
mk_tarball() {


@@ -8,7 +8,11 @@ set -ex
main() {
# Test a normal debug build.
cargo build --target "$TARGET" --verbose --all
if is_arm; then
cargo build --target "$TARGET" --verbose
else
cargo build --target "$TARGET" --verbose --all --features 'pcre2'
fi
# Show the output of the most recent build.rs stderr.
set +x
@@ -40,7 +44,7 @@ main() {
"$(dirname "${0}")/test_complete.sh"
# Run tests for ripgrep and all sub-crates.
cargo test --target "$TARGET" --verbose --all
cargo test --target "$TARGET" --verbose --all --features 'pcre2'
}
main


@@ -39,12 +39,14 @@ main() {
print -rl - 'Comparing options:' "-$rg" "+$_rg"
# 'Parse' options out of the `--help` output. To prevent false positives we
# only look at lines where the first non-white-space character is `-`
# only look at lines where the first non-white-space character is `-`, or
# where a long option starting with certain letters (see `_rg`) is found.
# Occasionally we may have to handle some manually, however
help_args=( ${(f)"$(
$rg --help |
$rg -- '^\s*-' |
$rg -io -- '[\t ,](-[a-z0-9]|--[a-z0-9-]+)\b' |
tr -d '\t ,' |
$rg -i -- '^\s+--?[a-z0-9]|--[imnp]' |
$rg -ior '$1' -- $'[\t /\"\'`.,](-[a-z0-9]|--[a-z0-9-]+)\\b' |
$rg -v -- --print0 | # False positives
sort -u
)"} )
@@ -58,8 +60,6 @@ main() {
comp_args=( ${comp_args%%-[:[]*} ) # Strip everything after -optname-
comp_args=( ${comp_args%%[:+=[]*} ) # Strip everything after other optspecs
comp_args=( ${comp_args##[^-]*} ) # Remove non-options
# This probably isn't necessary, but we should ensure the same order
comp_args=( ${(f)"$( print -rl - $comp_args | sort -u )"} )
(( $#help_args )) || {


@@ -55,13 +55,6 @@ gcc_prefix() {
esac
}
is_ssse3_target() {
case "$(architecture)" in
amd64) return 0 ;;
*) return 1 ;;
esac
}
is_x86() {
case "$(architecture)" in
amd64|i386) return 0 ;;


@@ -6,8 +6,8 @@
# Run ci/test_complete.sh after building to ensure that the options supported by
# this function stay in synch with the `rg` binary.
#
# @see http://zsh.sourceforge.net/Doc/Release/Completion-System.html
# @see https://github.com/zsh-users/zsh/blob/master/Etc/completion-style-guide
# For convenience, a completion reference guide is included at the bottom of
# this file.
#
# Originally based on code from the zsh-users project — see copyright notice
# below.
@@ -26,8 +26,10 @@ _rg() {
# style set. Note that this prefix check has to be updated manually to account
# for all of the potential negation options listed below!
if
# (--[imn]* => --ignore*, --messages, --no-*)
[[ $PREFIX$SUFFIX == --[imn]* ]] ||
# We also want to list all of these options during testing
[[ $_RG_COMPLETE_LIST_ARGS == (1|t*|y*) ]] ||
# (--[imnp]* => --ignore*, --messages, --no-*, --pcre2-unicode)
[[ $PREFIX$SUFFIX == --[imnp]* ]] ||
zstyle -t ":complete:$curcontext:*" complete-all
then
no=
@@ -61,8 +63,12 @@ _rg() {
$no"--no-column[don't show column numbers for matches]"
+ '(count)' # Counting options
'(passthru)'{-c,--count}'[only show count of matching lines for each file]'
'(passthru)--count-matches[only show count of individual matches for each file]'
{-c,--count}'[only show count of matching lines for each file]'
'--count-matches[only show count of individual matches for each file]'
+ '(encoding)' # Encoding options
{-E+,--encoding=}'[specify text encoding of files to search]: :_rg_encodings'
$no'--no-encoding[use default text encoding]'
+ file # File-input options
'*'{-f+,--file=}'[specify file containing patterns to search for]: :_files'
@@ -111,10 +117,19 @@ _rg() {
"--no-ignore-vcs[don't respect version control ignore files]"
$no'--ignore-vcs[respect version control ignore files]'
+ '(line)' # Line-number options
+ '(json)' # JSON options
'--json[output results in JSON Lines format]'
$no"--no-json[don't output results in JSON Lines format]"
+ '(line-number)' # Line-number options
{-n,--line-number}'[show line numbers for matches]'
{-N,--no-line-number}"[don't show line numbers for matches]"
+ '(line-terminator)' # Line-terminator options
'--crlf[use CRLF as line terminator]'
$no"--no-crlf[don't use CRLF as line terminator]"
'(text)--null-data[use NUL as line terminator]'
+ '(max-depth)' # Directory-depth options
'--max-depth=[specify max number of directories to descend]:number of directories'
'!--maxdepth=:number of directories'
@@ -131,16 +146,28 @@ _rg() {
'--mmap[search using memory maps when possible]'
"--no-mmap[don't search using memory maps]"
+ '(multiline)' # multiline options
'--multiline[permit matching across multiple lines]'
$no"--no-multiline[restrict matches to at most one line each]"
+ '(multiline)' # Multiline options
{-U,--multiline}'[permit matching across multiple lines]'
$no'(multiline-dotall)--no-multiline[restrict matches to at most one line each]'
+ '(multiline-dotall)' # Multiline DOTALL options
'(--no-multiline)--multiline-dotall[allow "." to match newline (with -U)]'
$no"(--no-multiline)--no-multiline-dotall[don't allow \".\" to match newline (with -U)]"
+ '(only)' # Only-match options
'(passthru replace)'{-o,--only-matching}'[show only matching part of each line]'
{-o,--only-matching}'[show only matching part of each line]'
+ '(passthru)' # Pass-through options
'(--vimgrep count only replace)--passthru[show both matching and non-matching lines]'
'!(--vimgrep count only replace)--passthrough'
'(--vimgrep)--passthru[show both matching and non-matching lines]'
'!(--vimgrep)--passthrough'
+ '(pcre2)' # PCRE2 options
{-P,--pcre2}'[enable matching with PCRE2]'
$no'(pcre2-unicode)--no-pcre2[disable matching with PCRE2]'
+ '(pcre2-unicode)' # PCRE2 Unicode options
$no'(--no-pcre2-unicode)--pcre2-unicode[enable PCRE2 Unicode mode (with -P)]'
'(--no-pcre2-unicode)--no-pcre2-unicode[disable PCRE2 Unicode mode (with -P)]'
+ '(pre)' # Preprocessing options
'(-z --search-zip)--pre=[specify preprocessor utility]:preprocessor utility:_command_names -e'
@@ -154,22 +181,27 @@ _rg() {
'(1 file)*'{-e+,--regexp=}'[specify pattern]:pattern'
+ '(replace)' # Replacement options
'(count only passthru)'{-r+,--replace=}'[specify string used to replace matches]:replace string'
{-r+,--replace=}'[specify string used to replace matches]:replace string'
+ '(sort)' # File-sorting options
'(threads)--sort-files[sort results by file path (disables parallelism)]'
$no"--no-sort-files[don't sort results by file path]"
+ stats # Statistics options
+ '(stats)' # Statistics options
'(--files file-match)--stats[show search statistics]'
$no"--no-stats[don't show search statistics]"
+ '(text)' # Binary-search options
{-a,--text}'[search binary files as if they were text]'
$no"--no-text[don't search binary files as if they were text]"
$no"(--null-data)--no-text[don't search binary files as if they were text]"
+ '(threads)' # Thread-count options
'(--sort-files)'{-j+,--threads=}'[specify approximate number of threads to use]:number of threads'
+ '(trim)' # Trim options
'--trim[trim any ASCII whitespace prefix from each line]'
$no"--no-trim[don't trim ASCII whitespace prefix from each line]"
+ type # Type options
'*'{-t+,--type=}'[only search files matching specified type]: :_rg_types'
'*--type-add=[add new glob for specified file type]: :->typespec'
@@ -198,7 +230,6 @@ _rg() {
'--context-separator=[specify string used to separate non-continuous context lines in output]:separator'
'--debug[show debug messages]'
'--dfa-size-limit=[specify upper size limit of generated DFA]:DFA size (bytes)'
'(-E --encoding)'{-E+,--encoding=}'[specify text encoding of files to search]: :_rg_encodings'
"(1 stats)--files[show each file that would be searched (but don't search)]"
'*--ignore-file=[specify additional ignore file]:ignore file:_files'
'(-v --invert-match)'{-v,--invert-match}'[invert matching]'
@@ -331,6 +362,157 @@ _rg_types() {
_rg "$@"
################################################################################
# ZSH COMPLETION REFERENCE
#
# For the convenience of developers who aren't especially familiar with zsh
# completion functions, a brief reference guide follows. This is in no way
# comprehensive; it covers just enough of the basic structure, syntax, and
# conventions to help someone make simple changes like adding new options. For
# more complete documentation regarding zsh completion functions, please see the
# following:
#
# * http://zsh.sourceforge.net/Doc/Release/Completion-System.html
# * https://github.com/zsh-users/zsh/blob/master/Etc/completion-style-guide
#
# OVERVIEW
#
# Most zsh completion functions are defined in terms of `_arguments`, which is a
# shell function that takes a series of argument specifications. The specs for
# `rg` are stored in an array, which is common for more complex functions; the
# elements of the array are passed to `_arguments` on invocation.
#
# ARGUMENT-SPECIFICATION SYNTAX
#
# The following is a contrived example of the argument specs for a simple tool:
#
# '(: * -)'{-h,--help}'[display help information]'
# '(-q -v --quiet --verbose)'{-q,--quiet}'[decrease output verbosity]'
# '!(-q -v --quiet --verbose)--silent'
# '(-q -v --quiet --verbose)'{-v,--verbose}'[increase output verbosity]'
# '--color=[specify when to use colors]:when:(always never auto)'
# '*:example file:_files'
#
# Although there may appear to be six specs here, there are actually nine; we
# use brace expansion to combine specs for options that go by multiple names,
# like `-q` and `--quiet`. This is customary, and ties in with the fact that zsh
# merges completion possibilities together when they have the same description.
#
# The first line defines the option `-h`/`--help`. With most tools, it isn't
# useful to complete anything after `--help` because it effectively overrides
# all others; the `(: * -)` at the beginning of the spec tells zsh not to
# complete any other operands (`:` and `*`) or options (`-`) after this one has
# been used. The `[...]` at the end associates a description with `-h`/`--help`;
# as mentioned, zsh will see the identical descriptions and merge these options
# together when offering completion possibilities.
#
# The next line defines `-q`/`--quiet`. Here we don't want to suppress further
# completions entirely, but we don't want to offer `-q` if `--quiet` has been
# given (since they do the same thing), nor do we want to offer `-v` (since it
# doesn't make sense to be quiet and verbose at the same time). We don't need to
# tell zsh not to offer `--quiet` a second time, since that's the default
# behaviour, but since this line expands to two specs describing `-q` *and*
# `--quiet` we do need to explicitly list all of them here.
#
# The next line defines a hidden option `--silent` — maybe it's a deprecated
# synonym for `--quiet`. The leading `!` indicates that zsh shouldn't offer this
# option during completion. The benefit of providing a spec for an option that
# shouldn't be completed is that, if someone *does* use it, we can correctly
# suppress completion of other options afterwards.
#
# The next line defines `-v`/`--verbose`; this works just like `-q`/`--quiet`.
#
# The next line defines `--color`. In this example, `--color` doesn't have a
# corresponding short option, so we don't need to use brace expansion. Further,
# there are no other options it's exclusive with (just itself), so we don't need
# to define those at the beginning. However, it does take a mandatory argument.
# The `=` at the end of `--color=` indicates that the argument may appear either
# like `--color always` or like `--color=always`; this is how most GNU-style
# command-line tools work. The corresponding short option would normally use `+`
# — for example, `-c+` would allow either `-c always` or `-calways`. For this
# option, the arguments are known ahead of time, so we can simply list them in
# parentheses at the end (`when` is used as the description for the argument).
#
# The last line defines an operand (a non-option argument). In this example, the
# operand can be used any number of times (the leading `*`), and it should be a
# file path, so we tell zsh to call the `_files` function to complete it. The
# `example file` in the middle is the description to use for this operand; we
# could use a space instead to accept the default provided by `_files`.
#
# GROUPING ARGUMENT SPECIFICATIONS
#
# Newer versions of zsh support grouping argument specs together. All specs
# following a `+` and then a group name are considered to be members of the
# named group. Grouping is useful mostly for organisational purposes; it makes
# the relationship between different options more obvious, and makes it easier
# to specify exclusions.
#
# We could rewrite our example above using grouping as follows:
#
# '(: * -)'{-h,--help}'[display help information]'
# '--color=[specify when to use colors]:when:(always never auto)'
# '*:example file:_files'
# + '(verbosity)'
# {-q,--quiet}'[decrease output verbosity]'
# '!--silent'
# {-v,--verbose}'[increase output verbosity]'
#
# Here we take advantage of a useful feature of spec grouping — when the group
# name is surrounded by parentheses, as in `(verbosity)`, it tells zsh that all
# of the options in that group are exclusive with each other. As a result, we
# don't need to manually list out the exclusions at the beginning of each
# option.
#
# Groups can also be referred to by name in other argument specs; for example:
#
# '(xyz)--aaa' '*: :_files'
# + xyz --xxx --yyy --zzz
#
# Here we use the group name `xyz` to tell zsh that `--xxx`, `--yyy`, and
# `--zzz` are not to be completed after `--aaa`. This makes the exclusion list
# much more compact and reusable.
#
# CONVENTIONS
#
# zsh completion functions generally adhere to the following conventions:
#
# * Use two spaces for indentation
# * Combine specs for options with different names using brace expansion
# * In combined specs, list the short option first (as in `{-a,--text}`)
# * Use `+` or `=` as described above for options that take arguments
# * Provide a description for all options, option-arguments, and operands
# * Capitalise/punctuate argument descriptions as phrases, not complete
# sentences — 'display help information', never 'Display help information.'
# (but still capitalise acronyms and proper names)
# * Write argument descriptions as verb phrases — 'display x', 'enable y',
# 'use z'
# * Word descriptions to make it clear when an option expects an argument;
# usually this is done with the word 'specify', as in 'specify x' or
#   'use specified x'
# * Write argument descriptions as tersely as possible — for example, articles
# like 'a' and 'the' should be omitted unless it would be confusing
#
# Other conventions currently used by this function:
#
# * Order argument specs alphabetically by group name, then option name
# * Group options that are directly related, mutually exclusive, or frequently
# referenced by other argument specs
# * Use only characters in the set [a-z0-9_-] in group names
# * Order exclusion lists as follows: short options, long options, groups
# * Use American English in descriptions
# * Use 'don't' in descriptions instead of 'do not'
# * Word descriptions for related options as similarly as possible. For example,
# `--foo[enable foo]` and `--no-foo[disable foo]`, or `--foo[use foo]` and
# `--no-foo[don't use foo]`
# * Word descriptions to make it clear when an option only makes sense with
# another option, usually by adding '(with -x)' to the end
# * Don't quote strings or variables unnecessarily. When quotes are required,
# prefer single-quotes to double-quotes
# * Prefix option specs with `$no` when the option serves only to negate the
# behaviour of another option that must be provided explicitly by the user.
# This prevents rarely used options from cluttering up the completion menu
################################################################################
# ------------------------------------------------------------------------------
# Copyright (c) 2011 Github zsh-users - http://github.com/zsh-users
# All rights reserved.


@@ -4,7 +4,7 @@ Cross platform single glob and glob set matching. Glob set matching is the
process of matching one or more glob patterns against a single candidate path
simultaneously, and returning all of the globs that matched.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.png)](https://travis-ci.org/BurntSushi/ripgrep)
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/globset.svg)](https://crates.io/crates/globset)


@@ -470,7 +470,6 @@ impl GlobSetBuilder {
}
/// Add a new pattern to this set.
#[allow(dead_code)]
pub fn add(&mut self, pat: Glob) -> &mut GlobSetBuilder {
self.pats.push(pat);
self


@@ -1,6 +1,6 @@
[package]
name = "grep-matcher"
version = "0.0.1" #:version
version = "0.1.0" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
A trait for regular expressions, with a focus on line oriented search.


@@ -1,4 +1,36 @@
grep
----
This is a *library* that provides grep-style line-by-line regex searching (with
comparable performance to `grep` itself).
grep-matcher
------------
This crate provides a low level interface for describing regular expression
matchers. The `grep` crate uses this interface in order to make the regex
engine it uses pluggable.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep-matcher.svg)](https://crates.io/crates/grep-matcher)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep-matcher](https://docs.rs/grep-matcher)
**NOTE:** You probably don't want to use this crate directly. Instead, you
should prefer the facade defined in the
[`grep`](https://docs.rs/grep)
crate.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep-matcher = "0.1"
```
and this to your crate root:
```rust
extern crate grep_matcher;
```


@@ -1,5 +1,39 @@
/*!
An interface for regular expressions, with a focus on line oriented search.
This crate provides an interface for regular expressions, with a focus on line
oriented search. The purpose of this crate is to provide a low level matching
interface that permits any kind of substring or regex implementation to power
the search routines provided by the
[`grep-searcher`](https://docs.rs/grep-searcher)
crate.
The primary thing provided by this crate is the
[`Matcher`](trait.Matcher.html)
trait. The trait defines an abstract interface for text search. It is robust
enough to support everything from basic substring search all the way to
arbitrarily complex regular expression implementations without sacrificing
performance.
A key design decision made in this crate is the use of *internal iteration*,
also known as the "push" model of searching. In this paradigm,
implementations of the `Matcher` trait will drive search and execute callbacks
provided by the caller when a match is found. This is in contrast to the
usual style of *external iteration* (the "pull" model) found throughout the
Rust ecosystem. There are two primary reasons why internal iteration was
chosen:
* Some search implementations may themselves require internal iteration.
Converting an internal iterator to an external iterator can be non-trivial
and sometimes even practically impossible.
* Rust's type system isn't quite expressive enough to write a generic interface
using external iteration without giving something else up (namely, ease of
use and/or performance).
In other words, internal iteration was chosen because it is the lowest common
denominator and because it is probably the least bad way of expressing the
interface in today's Rust. As a result, this trait isn't specifically intended
for everyday use, although you might find it to be a happy price to pay if you
want to write code that is generic over multiple different regex
implementations.
*/
#![deny(missing_docs)]
@@ -186,6 +220,7 @@ enum LineTerminatorImp {
impl LineTerminator {
/// Return a new single-byte line terminator. Any byte is valid.
#[inline]
pub fn byte(byte: u8) -> LineTerminator {
LineTerminator(LineTerminatorImp::Byte([byte]))
}
@@ -194,11 +229,13 @@ impl LineTerminator {
///
/// When this option is used, consumers may generally treat a lone `\n` as
/// a line terminator in addition to `\r\n`.
#[inline]
pub fn crlf() -> LineTerminator {
LineTerminator(LineTerminatorImp::CRLF)
}
/// Returns true if and only if this line terminator is CRLF.
#[inline]
pub fn is_crlf(&self) -> bool {
self.0 == LineTerminatorImp::CRLF
}
@@ -208,6 +245,7 @@ impl LineTerminator {
/// If the line terminator is CRLF, then this returns `\n`. This is
/// useful for routines that, for example, find line boundaries by treating
/// `\n` as a line terminator even when it isn't preceded by `\r`.
#[inline]
pub fn as_byte(&self) -> u8 {
match self.0 {
LineTerminatorImp::Byte(array) => array[0],
@@ -221,6 +259,7 @@ impl LineTerminator {
/// `CRLF`, in which case, it returns `\r\n`.
///
/// The slice returned is guaranteed to have length at least `1`.
#[inline]
pub fn as_bytes(&self) -> &[u8] {
match self.0 {
LineTerminatorImp::Byte(ref array) => array,
@@ -230,6 +269,7 @@ impl LineTerminator {
}
impl Default for LineTerminator {
#[inline]
fn default() -> LineTerminator {
LineTerminator::byte(b'\n')
}
@@ -324,12 +364,12 @@ impl ByteSet {
///
/// Principally, this trait provides a way to access capturing groups
/// in a uniform way that does not require any specific representation.
/// Namely, differ matcher implementations may require different in-memory
/// Namely, different matcher implementations may require different in-memory
/// representations of capturing groups. This trait permits matchers to
/// maintain their specific in-memory representation.
///
/// Note that this trait explicitly does not provide a way to construct a new
/// captures value. Instead, it is the responsibility of a `Matcher` to build
/// capture value. Instead, it is the responsibility of a `Matcher` to build
/// one, which might require knowledge of the matcher's internal implementation
/// details.
pub trait Captures {
@@ -426,7 +466,7 @@ impl Captures for NoCaptures {
/// This error type implements the `std::error::Error` and `fmt::Display`
/// traits for use in matcher implementations that can never produce errors.
///
/// The `fmt::Display` impl for this type panics.
/// The `fmt::Debug` and `fmt::Display` impls for this type panic.
#[derive(Debug, Eq, PartialEq)]
pub struct NoError(());
@@ -463,6 +503,20 @@ pub enum LineMatchKind {
}
/// A matcher defines an interface for regular expression implementations.
///
/// While this trait is large, there are only two required methods that
/// implementors must provide: `find_at` and `new_captures`. If captures
/// aren't supported by your implementation, then `new_captures` can be
/// implemented with
/// [`NoCaptures`](struct.NoCaptures.html). If your implementation does support
/// capture groups, then you should also implement the other capture related
/// methods, as dictated by the documentation. Crucially, this includes
/// `captures_at`.
///
/// The rest of the methods on this trait provide default implementations on
/// top of `find_at` and `new_captures`. It is not uncommon for implementations
/// to be able to provide faster variants of some methods; in those cases,
/// simply override the default implementation.
pub trait Matcher {
/// The concrete type of capturing groups used for this matcher.
///

grep-pcre2/Cargo.toml Normal file

@@ -0,0 +1,17 @@
[package]
name = "grep-pcre2"
version = "0.1.0" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Use PCRE2 with the 'grep' crate.
"""
documentation = "https://docs.rs/grep-pcre2"
homepage = "https://github.com/BurntSushi/ripgrep"
repository = "https://github.com/BurntSushi/ripgrep"
readme = "README.md"
keywords = ["regex", "grep", "pcre", "backreference", "look"]
license = "Unlicense/MIT"
[dependencies]
grep-matcher = { version = "0.1.0", path = "../grep-matcher" }
pcre2 = "0.1"

grep-pcre2/LICENSE-MIT Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Andrew Gallant
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

grep-pcre2/README.md Normal file

@@ -0,0 +1,39 @@
grep-pcre2
----------
The `grep-pcre2` crate provides an implementation of the `Matcher` trait from
the `grep-matcher` crate. This implementation permits PCRE2 to be used in the
`grep` crate for fast line oriented searching.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep-pcre2.svg)](https://crates.io/crates/grep-pcre2)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep-pcre2](https://docs.rs/grep-pcre2)
**NOTE:** You probably don't want to use this crate directly. Instead, you
should prefer the facade defined in the
[`grep`](https://docs.rs/grep)
crate.
If you're looking to just use PCRE2 from Rust, then you probably want the
[`pcre2`](https://docs.rs/pcre2)
crate, which provides high level safe bindings to PCRE2.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep-pcre2 = "0.1"
```
and this to your crate root:
```rust
extern crate grep_pcre2;
```

grep-pcre2/UNLICENSE Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>

grep-pcre2/src/error.rs Normal file

@@ -0,0 +1,59 @@
use std::error;
use std::fmt;
/// An error that can occur in this crate.
///
/// Generally, this error corresponds to problems building a regular
/// expression, whether it's in parsing, compilation or a problem with
/// guaranteeing a configured optimization.
#[derive(Clone, Debug)]
pub struct Error {
kind: ErrorKind,
}
impl Error {
pub(crate) fn regex<E: error::Error>(err: E) -> Error {
Error { kind: ErrorKind::Regex(err.to_string()) }
}
/// Return the kind of this error.
pub fn kind(&self) -> &ErrorKind {
&self.kind
}
}
/// The kind of an error that can occur.
#[derive(Clone, Debug)]
pub enum ErrorKind {
/// An error that occurred as a result of parsing a regular expression.
/// This can be a syntax error or an error that results from attempting to
/// compile a regular expression that is too big.
///
/// The string here is the underlying error converted to a string.
Regex(String),
/// Hints that destructuring should not be exhaustive.
///
/// This enum may grow additional variants, so this makes sure clients
/// don't count on exhaustive matching. (Otherwise, adding a new variant
/// could break existing code.)
#[doc(hidden)]
__Nonexhaustive,
}
impl error::Error for Error {
fn description(&self) -> &str {
match self.kind {
ErrorKind::Regex(_) => "regex error",
ErrorKind::__Nonexhaustive => unreachable!(),
}
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self.kind {
ErrorKind::Regex(ref s) => write!(f, "{}", s),
ErrorKind::__Nonexhaustive => unreachable!(),
}
}
}

grep-pcre2/src/lib.rs Normal file

@@ -0,0 +1,15 @@
/*!
An implementation of `grep-matcher`'s `Matcher` trait for
[PCRE2](https://www.pcre.org/).
*/
#![deny(missing_docs)]
extern crate grep_matcher;
extern crate pcre2;
pub use error::{Error, ErrorKind};
pub use matcher::{RegexCaptures, RegexMatcher, RegexMatcherBuilder};
mod error;
mod matcher;

grep-pcre2/src/matcher.rs Normal file

@@ -0,0 +1,425 @@
use std::collections::HashMap;
use grep_matcher::{Captures, Match, Matcher};
use pcre2::bytes::{CaptureLocations, Regex, RegexBuilder};
use error::Error;
/// A builder for configuring the compilation of a PCRE2 regex.
#[derive(Clone, Debug)]
pub struct RegexMatcherBuilder {
builder: RegexBuilder,
case_smart: bool,
word: bool,
}
impl RegexMatcherBuilder {
/// Create a new matcher builder with a default configuration.
pub fn new() -> RegexMatcherBuilder {
RegexMatcherBuilder {
builder: RegexBuilder::new(),
case_smart: false,
word: false,
}
}
/// Compile the given pattern into a PCRE matcher using the current
/// configuration.
///
/// If there was a problem compiling the pattern, then an error is
/// returned.
pub fn build(&self, pattern: &str) -> Result<RegexMatcher, Error> {
let mut builder = self.builder.clone();
if self.case_smart && !has_uppercase_literal(pattern) {
builder.caseless(true);
}
let res =
if self.word {
let pattern = format!(r"(?<!\w)(?:{})(?!\w)", pattern);
builder.build(&pattern)
} else {
builder.build(pattern)
};
res.map_err(Error::regex).map(|regex| {
let mut names = HashMap::new();
for (i, name) in regex.capture_names().iter().enumerate() {
if let Some(ref name) = *name {
names.insert(name.to_string(), i);
}
}
RegexMatcher { regex, names }
})
}
/// Enables case insensitive matching.
///
/// If the `utf` option is also set, then Unicode case folding is used
/// to determine case insensitivity. When the `utf` option is not set,
/// then only standard ASCII case insensitivity is considered.
///
/// This option corresponds to the `i` flag.
pub fn caseless(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.caseless(yes);
self
}
/// Whether to enable "smart case" or not.
///
/// When smart case is enabled, the builder will automatically enable
/// case insensitive matching based on how the pattern is written. Namely,
/// case insensitive mode is enabled when both of the following things
/// are believed to be true:
///
/// 1. The pattern contains at least one literal character. For example,
/// `a\w` contains a literal (`a`) but `\w` does not.
/// 2. Of the literals in the pattern, none of them are considered to be
/// uppercase according to Unicode. For example, `foo\pL` has no
/// uppercase literals but `Foo\pL` does.
///
/// Note that the implementation of this is not perfect. Namely, `\p{Ll}`
/// will prevent case insensitive matching even though it is part of a meta
/// sequence. This bug will probably never be fixed.
pub fn case_smart(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.case_smart = yes;
self
}
/// Enables "dot all" matching.
///
/// When enabled, the `.` metacharacter in the pattern matches any
/// character, including `\n`. When disabled (the default), `.` will match
/// any character except for `\n`.
///
/// This option corresponds to the `s` flag.
pub fn dotall(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.dotall(yes);
self
}
/// Enable "extended" mode in the pattern, where whitespace is ignored.
///
/// This option corresponds to the `x` flag.
pub fn extended(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.extended(yes);
self
}
/// Enable multiline matching mode.
///
/// When enabled, the `^` and `$` anchors will match both at the beginning
/// and end of a subject string, in addition to matching at the start of
/// a line and the end of a line. When disabled, the `^` and `$` anchors
/// will only match at the beginning and end of a subject string.
///
/// This option corresponds to the `m` flag.
pub fn multi_line(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.multi_line(yes);
self
}
/// Enable matching of CRLF as a line terminator.
///
/// When enabled, anchors such as `^` and `$` will match any of the
/// following as a line terminator: `\r`, `\n` or `\r\n`.
///
/// This is disabled by default, in which case, only `\n` is recognized as
/// a line terminator.
pub fn crlf(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.crlf(yes);
self
}
/// Require that all matches occur on word boundaries.
///
/// Enabling this option is subtly different than putting `\b` assertions
/// on both sides of your pattern. In particular, a `\b` assertion requires
/// that one side of it match a word character while the other match a
/// non-word character. This option, in contrast, merely requires that
/// one side match a non-word character.
///
/// For example, `\b-2\b` will not match `foo -2 bar` since `-` is not a
/// word character. However, `-2` with this `word` option enabled will
/// match the `-2` in `foo -2 bar`.
pub fn word(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.word = yes;
self
}
/// Enable Unicode matching mode.
///
/// When enabled, the following patterns become Unicode aware: `\b`, `\B`,
/// `\d`, `\D`, `\s`, `\S`, `\w`, `\W`.
///
/// When set, this implies UTF matching mode. It is not possible to enable
/// Unicode matching mode without enabling UTF matching mode.
///
/// This is disabled by default.
pub fn ucp(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.ucp(yes);
self
}
/// Enable UTF matching mode.
///
/// When enabled, characters are treated as sequences of code units that
/// make up a single codepoint instead of as single bytes. For example,
/// this will cause `.` to match any single UTF-8 encoded codepoint, whereas
/// when this is disabled, `.` will match any single byte (except for `\n` in
/// both cases, unless "dot all" mode is enabled).
///
/// Note that when UTF matching mode is enabled, every search performed
/// will do a UTF-8 validation check, which can impact performance. The
/// UTF-8 check can be disabled via the `disable_utf_check` option, but it
/// is undefined behavior to enable UTF matching mode and search invalid
/// UTF-8.
///
/// This is disabled by default.
pub fn utf(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.utf(yes);
self
}
/// When UTF matching mode is enabled, this will disable the UTF checking
/// that PCRE2 will normally perform automatically. If UTF matching mode
/// is not enabled, then this has no effect.
///
/// UTF checking is enabled by default when UTF matching mode is enabled.
/// If UTF matching mode is enabled and UTF checking is enabled, then PCRE2
/// will return an error if you attempt to search a subject string that is
/// not valid UTF-8.
///
/// # Safety
///
/// It is undefined behavior to disable the UTF check in UTF matching mode
/// and search a subject string that is not valid UTF-8. When the UTF check
/// is disabled, callers must guarantee that the subject string is valid
/// UTF-8.
pub unsafe fn disable_utf_check(&mut self) -> &mut RegexMatcherBuilder {
self.builder.disable_utf_check();
self
}
/// Enable PCRE2's JIT.
///
/// This generally speeds up matching quite a bit. The downside is that it
/// can increase the time it takes to compile a pattern.
///
/// This is disabled by default.
pub fn jit(&mut self, yes: bool) -> &mut RegexMatcherBuilder {
self.builder.jit(yes);
self
}
}
/// An implementation of the `Matcher` trait using PCRE2.
#[derive(Clone, Debug)]
pub struct RegexMatcher {
regex: Regex,
names: HashMap<String, usize>,
}
impl RegexMatcher {
/// Create a new matcher from the given pattern using the default
/// configuration.
pub fn new(pattern: &str) -> Result<RegexMatcher, Error> {
RegexMatcherBuilder::new().build(pattern)
}
}
impl Matcher for RegexMatcher {
type Captures = RegexCaptures;
type Error = Error;
fn find_at(
&self,
haystack: &[u8],
at: usize,
) -> Result<Option<Match>, Error> {
Ok(self.regex
.find_at(haystack, at)
.map_err(Error::regex)?
.map(|m| Match::new(m.start(), m.end())))
}
fn new_captures(&self) -> Result<RegexCaptures, Error> {
Ok(RegexCaptures::new(self.regex.capture_locations()))
}
fn capture_count(&self) -> usize {
self.regex.captures_len()
}
fn capture_index(&self, name: &str) -> Option<usize> {
self.names.get(name).map(|i| *i)
}
fn try_find_iter<F, E>(
&self,
haystack: &[u8],
mut matched: F,
) -> Result<Result<(), E>, Error>
where F: FnMut(Match) -> Result<bool, E>
{
for result in self.regex.find_iter(haystack) {
let m = result.map_err(Error::regex)?;
match matched(Match::new(m.start(), m.end())) {
Ok(true) => continue,
Ok(false) => return Ok(Ok(())),
Err(err) => return Ok(Err(err)),
}
}
Ok(Ok(()))
}
fn captures_at(
&self,
haystack: &[u8],
at: usize,
caps: &mut RegexCaptures,
) -> Result<bool, Error> {
Ok(self.regex
.captures_read_at(&mut caps.locs, haystack, at)
.map_err(Error::regex)?
.is_some())
}
}
/// Represents the match offsets of each capturing group in a match.
///
/// The first, or `0`th capture group, always corresponds to the entire match
/// and is guaranteed to be present when a match occurs. The next capture
/// group, at index `1`, corresponds to the first capturing group in the regex,
/// ordered by the position at which the left opening parenthesis occurs.
///
/// Note that not all capturing groups are guaranteed to be present in a match.
/// For example, in the regex, `(?P<foo>\w)|(?P<bar>\W)`, only one of `foo`
/// or `bar` will ever be set in any given match.
///
/// In order to access a capture group by name, you'll need to first find the
/// index of the group using the corresponding matcher's `capture_index`
/// method, and then use that index with `RegexCaptures::get`.
#[derive(Clone, Debug)]
pub struct RegexCaptures {
/// Where the locations are stored.
locs: CaptureLocations,
}
impl Captures for RegexCaptures {
fn len(&self) -> usize {
self.locs.len()
}
fn get(&self, i: usize) -> Option<Match> {
self.locs.get(i).map(|(s, e)| Match::new(s, e))
}
}
impl RegexCaptures {
pub(crate) fn new(locs: CaptureLocations) -> RegexCaptures {
RegexCaptures { locs }
}
}
/// Determine whether the pattern contains an uppercase character which should
/// negate the effect of the smart-case option.
///
/// Ideally we would be able to check the AST in order to correctly handle
/// things like '\p{Ll}' and '\p{Lu}' (which should be treated as explicitly
/// cased), but PCRE doesn't expose enough details for that kind of analysis.
/// For now, our 'good enough' solution is to simply perform a semi-naïve
/// scan of the input pattern and ignore all characters following a '\'.
/// This at least lets us support the most common cases, like 'foo\w' and
/// 'foo\S', in an intuitive manner.
fn has_uppercase_literal(pattern: &str) -> bool {
let mut chars = pattern.chars();
while let Some(c) = chars.next() {
if c == '\\' {
chars.next();
} else if c.is_uppercase() {
return true;
}
}
false
}
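As a rough illustration of the scan just described, a test inside this module
might assert the following (these simply restate the behavior and the
documented imperfection from the comments above):

```rust
assert!(!has_uppercase_literal(r"foo\S")); // 'S' follows '\' and is skipped
assert!(has_uppercase_literal(r"Foo\pL")); // 'F' is an uppercase literal
assert!(has_uppercase_literal(r"\p{Ll}")); // imperfect: the 'L' in '{Ll}' still counts
```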
#[cfg(test)]
mod tests {
use grep_matcher::{LineMatchKind, Matcher};
use super::*;
// Test that enabling word matches does the right thing and demonstrate
// the difference between it and surrounding the regex in `\b`.
#[test]
fn word() {
let matcher = RegexMatcherBuilder::new()
.word(true)
.build(r"-2")
.unwrap();
assert!(matcher.is_match(b"abc -2 foo").unwrap());
let matcher = RegexMatcherBuilder::new()
.word(false)
.build(r"\b-2\b")
.unwrap();
assert!(!matcher.is_match(b"abc -2 foo").unwrap());
}
// Test that enabling CRLF permits `$` to match at the end of a line.
#[test]
fn line_terminator_crlf() {
// Test normal use of `$` with a `\n` line terminator.
let matcher = RegexMatcherBuilder::new()
.multi_line(true)
.build(r"abc$")
.unwrap();
assert!(matcher.is_match(b"abc\n").unwrap());
// Test that `$` doesn't match at `\r\n` boundary normally.
let matcher = RegexMatcherBuilder::new()
.multi_line(true)
.build(r"abc$")
.unwrap();
assert!(!matcher.is_match(b"abc\r\n").unwrap());
// Now check the CRLF handling.
let matcher = RegexMatcherBuilder::new()
.multi_line(true)
.crlf(true)
.build(r"abc$")
.unwrap();
assert!(matcher.is_match(b"abc\r\n").unwrap());
}
// Test that smart case works.
#[test]
fn case_smart() {
let matcher = RegexMatcherBuilder::new()
.case_smart(true)
.build(r"abc")
.unwrap();
assert!(matcher.is_match(b"ABC").unwrap());
let matcher = RegexMatcherBuilder::new()
.case_smart(true)
.build(r"aBc")
.unwrap();
assert!(!matcher.is_match(b"ABC").unwrap());
}
// Test that finding candidate lines works as expected.
#[test]
fn candidate_lines() {
fn is_confirmed(m: LineMatchKind) -> bool {
match m {
LineMatchKind::Confirmed(_) => true,
_ => false,
}
}
let matcher = RegexMatcherBuilder::new()
.build(r"\wfoo\s")
.unwrap();
let m = matcher.find_candidate_line(b"afoo ").unwrap().unwrap();
assert!(is_confirmed(m));
}
}


@@ -1,6 +1,6 @@
[package]
name = "grep-printer"
version = "0.0.1" #:version
version = "0.1.0" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
An implementation of the grep crate's Sink trait that provides standard
@@ -19,13 +19,12 @@ serde1 = ["base64", "serde", "serde_derive", "serde_json"]
[dependencies]
base64 = { version = "0.9", optional = true }
grep-matcher = { version = "0.0.1", path = "../grep-matcher" }
grep-searcher = { version = "0.0.1", path = "../grep-searcher" }
log = "0.4"
grep-matcher = { version = "0.1.0", path = "../grep-matcher" }
grep-searcher = { version = "0.1.0", path = "../grep-searcher" }
termcolor = "1"
serde = { version = "1", optional = true }
serde_derive = { version = "1", optional = true }
serde_json = { version = "1", optional = true }
[dev-dependencies]
grep-regex = { version = "0.0.1", path = "../grep-regex" }
grep-regex = { version = "0.1.0", path = "../grep-regex" }


@@ -1,4 +1,35 @@
grep
----
This is a *library* that provides grep-style line-by-line regex searching (with
comparable performance to `grep` itself).
grep-printer
------------
Print results from line oriented searching in a human readable, aggregate or
JSON Lines format.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep-printer.svg)](https://crates.io/crates/grep-printer)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep-printer](https://docs.rs/grep-printer)
**NOTE:** You probably don't want to use this crate directly. Instead, you
should prefer the facade defined in the
[`grep`](https://docs.rs/grep)
crate.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep-printer = "0.1"
```
and this to your crate root:
```rust
extern crate grep_printer;
```


@@ -91,7 +91,7 @@ impl JSONBuilder {
/// When enabled, the `begin` and `end` messages are always emitted, even
/// when no match is found.
///
/// When disabled, the `begin` and `end` messages are only shown is there
/// When disabled, the `begin` and `end` messages are only shown if there
/// is at least one `match` or `context` message.
///
/// This is disabled by default.
@@ -108,7 +108,7 @@ impl JSONBuilder {
///
/// # Format
///
/// This section describe the JSON format used by this printer.
/// This section describes the JSON format used by this printer.
///
/// To skip the rigamarole, take a look at the
/// [example](#example)
@@ -619,6 +619,13 @@ impl<'p, 's, M: Matcher, W: io::Write> JSONSink<'p, 's, M, W> {
matches.push(m);
true
}).map_err(io::Error::error_message)?;
// Don't report empty matches appearing at the end of the bytes.
if !matches.is_empty()
&& matches.last().unwrap().is_empty()
&& matches.last().unwrap().start() >= bytes.len()
{
matches.pop().unwrap();
}
Ok(())
}

View File

@@ -1,7 +1,7 @@
/*!
This crate provides a featureful and fast printer for showing search results
in a human readable way, and another printer for showing results in a machine
readable way.
This crate provides featureful and fast printers that interoperate with the
[`grep-searcher`](https://docs.rs/grep-searcher)
crate.
# Brief overview
@@ -74,8 +74,6 @@ extern crate grep_matcher;
#[cfg(test)]
extern crate grep_regex;
extern crate grep_searcher;
#[macro_use]
extern crate log;
#[cfg(feature = "serde1")]
extern crate serde;
#[cfg(feature = "serde1")]

View File

@@ -1,4 +1,4 @@
use std::cell::RefCell;
use std::cell::{Cell, RefCell};
use std::cmp;
use std::io::{self, Write};
use std::path::Path;
@@ -139,6 +139,8 @@ impl StandardBuilder {
/// A [`UserColorSpec`](struct.UserColorSpec.html) can be constructed from
/// a string in accordance with the color specification format. See the
/// `UserColorSpec` type documentation for more details on the format.
/// A [`ColorSpecs`](struct.ColorSpecs.html) can then be generated from
/// zero or more `UserColorSpec`s.
///
/// Regardless of the color specifications provided here, whether color
/// is actually used or not is determined by the implementation of
@@ -475,6 +477,7 @@ impl<W: WriteColor> Standard<W> {
} else {
None
};
let needs_match_granularity = self.needs_match_granularity();
StandardSink {
matcher: matcher,
standard: self,
@@ -485,6 +488,7 @@ impl<W: WriteColor> Standard<W> {
after_context_remaining: 0,
binary_byte_offset: None,
stats: stats,
needs_match_granularity: needs_match_granularity,
}
}
@@ -511,6 +515,7 @@ impl<W: WriteColor> Standard<W> {
};
let ppath = PrinterPath::with_separator(
path.as_ref(), self.config.separator_path);
let needs_match_granularity = self.needs_match_granularity();
StandardSink {
matcher: matcher,
standard: self,
@@ -521,8 +526,32 @@ impl<W: WriteColor> Standard<W> {
after_context_remaining: 0,
binary_byte_offset: None,
stats: stats,
needs_match_granularity: needs_match_granularity,
}
}
/// Returns true if and only if the configuration of the printer requires
/// us to find each individual match in the lines reported by the searcher.
///
/// We care about this distinction because finding each individual match
/// costs more, so we only do it when we need to.
fn needs_match_granularity(&self) -> bool {
let supports_color = self.wtr.borrow().supports_color();
let match_colored = !self.config.colors.matched().is_none();
// Coloring requires identifying each individual match.
(supports_color && match_colored)
// The column feature requires finding the position of the first match.
|| self.config.column
// Requires finding each match for performing replacement.
|| self.config.replacement.is_some()
// Emitting a line for each match requires finding each match.
|| self.config.per_match
// Emitting only the match requires finding each match.
|| self.config.only_matching
// Computing certain statistics requires finding each match.
|| self.config.stats
}
}
impl<W> Standard<W> {
@@ -581,6 +610,7 @@ pub struct StandardSink<'p, 's, M: Matcher, W: 's> {
after_context_remaining: u64,
binary_byte_offset: Option<u64>,
stats: Option<Stats>,
needs_match_granularity: bool,
}
impl<'p, 's, M: Matcher, W: WriteColor> StandardSink<'p, 's, M, W> {
@@ -632,7 +662,7 @@ impl<'p, 's, M: Matcher, W: WriteColor> StandardSink<'p, 's, M, W> {
/// locations if the current configuration demands match granularity.
fn record_matches(&mut self, bytes: &[u8]) -> io::Result<()> {
self.standard.matches.clear();
if !self.needs_match_granularity() {
if !self.needs_match_granularity {
return Ok(());
}
// If printing requires knowing the location of each individual match,
@@ -647,6 +677,13 @@ impl<'p, 's, M: Matcher, W: WriteColor> StandardSink<'p, 's, M, W> {
matches.push(m);
true
}).map_err(io::Error::error_message)?;
// Don't report empty matches appearing at the end of the bytes.
if !matches.is_empty()
&& matches.last().unwrap().is_empty()
&& matches.last().unwrap().start() >= bytes.len()
{
matches.pop().unwrap();
}
Ok(())
}
@@ -670,29 +707,6 @@ impl<'p, 's, M: Matcher, W: WriteColor> StandardSink<'p, 's, M, W> {
Ok(())
}
/// Returns true if and only if the configuration of the printer requires
/// us to find each individual match in the lines reported by the searcher.
///
/// We care about this distinction because finding each individual match
/// costs more, so we only do it when we need to.
fn needs_match_granularity(&self) -> bool {
let supports_color = self.standard.wtr.borrow().supports_color();
let match_colored = !self.standard.config.colors.matched().is_none();
// Coloring requires identifying each individual match.
(supports_color && match_colored)
// The column feature requires finding the position of the first match.
|| self.standard.config.column
// Requires finding each match for performing replacement.
|| self.standard.config.replacement.is_some()
// Emitting a line for each match requires finding each match.
|| self.standard.config.per_match
// Emitting only the match requires finding each match.
|| self.standard.config.only_matching
// Computing certain statistics requires finding each match.
|| self.standard.config.stats
}
/// Returns true if this printer should quit.
///
/// This implements the logic for handling quitting after seeing a certain
@@ -805,6 +819,8 @@ struct StandardImpl<'a, M: 'a + Matcher, W: 'a> {
searcher: &'a Searcher,
sink: &'a StandardSink<'a, 'a, M, W>,
sunk: Sunk<'a>,
/// Set to true if and only if we are writing a match with color.
in_color_match: Cell<bool>,
}
impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
@@ -817,6 +833,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
searcher: searcher,
sink: sink,
sunk: Sunk::empty(),
in_color_match: Cell::new(false),
}
}
@@ -860,38 +877,14 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
self.write_search_prelude()?;
if self.sunk.matches().is_empty() {
if self.multi_line() && !self.is_context() {
trace!(
"{:?}:{:?}:{}: sinking via sink_fast_multi_line",
self.sink.path,
self.sunk.line_number(),
self.sunk.absolute_byte_offset()
);
self.sink_fast_multi_line()
} else {
trace!(
"{:?}:{:?}:{}: sinking via sink_fast",
self.sink.path,
self.sunk.line_number(),
self.sunk.absolute_byte_offset()
);
self.sink_fast()
}
} else {
if self.multi_line() && !self.is_context() {
trace!(
"{:?}:{:?}:{}: sinking via sink_slow_multi_line",
self.sink.path,
self.sunk.line_number(),
self.sunk.absolute_byte_offset()
);
self.sink_slow_multi_line()
} else {
trace!(
"{:?}:{:?}:{}: sinking via sink_slow",
self.sink.path,
self.sunk.line_number(),
self.sunk.absolute_byte_offset()
);
self.sink_slow()
}
}
@@ -992,7 +985,6 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
}
let line_term = self.searcher.line_terminator().as_byte();
let spec = self.config().colors.matched();
let bytes = self.sunk.bytes();
let matches = self.sunk.matches();
let mut midx = 0;
@@ -1014,7 +1006,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
line = line.with_end(line.end() - 1);
}
if self.config().trim_ascii {
line = trim_ascii_prefix_range(bytes, line);
line = self.trim_ascii_prefix_range(bytes, line);
}
while !line.is_empty() {
@@ -1023,6 +1015,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
midx += 1;
continue;
} else {
self.end_color_match()?;
self.write(&bytes[line])?;
break;
}
@@ -1031,14 +1024,17 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
if line.start() < m.start() {
let upto = cmp::min(line.end(), m.start());
self.end_color_match()?;
self.write(&bytes[line.with_end(upto)])?;
line = line.with_start(upto);
} else {
let upto = cmp::min(line.end(), m.end());
self.write_spec(spec, &bytes[line.with_end(upto)])?;
self.start_color_match()?;
self.write(&bytes[line.with_end(upto)])?;
line = line.with_start(upto);
}
}
self.end_color_match()?;
self.write_line_term()?;
}
Ok(())
@@ -1058,7 +1054,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
line = line.with_end(line.end() - 1);
}
if self.config().trim_ascii {
line = trim_ascii_prefix_range(bytes, line);
line = self.trim_ascii_prefix_range(bytes, line);
}
while !line.is_empty() {
if matches[midx].end() <= line.start() {
@@ -1127,7 +1123,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
line = line.with_end(line.end() - 1);
}
if self.config().trim_ascii {
line = trim_ascii_prefix_range(bytes, line);
line = self.trim_ascii_prefix_range(bytes, line);
}
while !line.is_empty() {
@@ -1153,6 +1149,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
/// Write the beginning part of a matching line. This (may) include things
/// like the file path, line number among others, depending on the
/// configuration and the parameters given.
#[inline(always)]
fn write_prelude(
&self,
absolute_byte_offset: u64,
@@ -1178,6 +1175,7 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
Ok(())
}
#[inline(always)]
fn write_line(
&self,
line: &[u8],
@@ -1208,27 +1206,27 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
if !self.config().trim_ascii {
0
} else {
trim_ascii_prefix_range(
self.trim_ascii_prefix_range(
line,
Match::new(0, line.len()),
).start()
};
for mut m in matches.iter().map(|&m| m) {
if last_written <= m.start() {
if last_written < m.start() {
self.end_color_match()?;
self.write(&line[last_written..m.start()])?;
} else if last_written < m.end() {
m = m.with_start(last_written);
} else {
continue;
}
last_written = m.end();
// This conditional checks if the match is both empty *and*
// past the end of the line. In this case, we never want to
// emit an additional color escape.
if m.start() != m.end() || m.end() != line.len() {
self.write_spec(spec, &line[m])?;
if !m.is_empty() {
self.start_color_match()?;
self.write(&line[m])?;
}
last_written = m.end();
}
self.end_color_match()?;
self.write(&line[last_written..])?;
if !self.has_line_terminator(line) {
self.write_line_term()?;
@@ -1365,11 +1363,29 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
Ok(())
}
fn start_color_match(&self) -> io::Result<()> {
if self.in_color_match.get() {
return Ok(());
}
self.wtr().borrow_mut().set_color(self.config().colors.matched())?;
self.in_color_match.set(true);
Ok(())
}
fn end_color_match(&self) -> io::Result<()> {
if !self.in_color_match.get() {
return Ok(());
}
self.wtr().borrow_mut().reset()?;
self.in_color_match.set(false);
Ok(())
}
fn write_trim(&self, buf: &[u8]) -> io::Result<()> {
if !self.config().trim_ascii {
return self.write(buf);
}
self.write(trim_ascii_prefix(buf))
self.write(self.trim_ascii_prefix(buf))
}
fn write(&self, buf: &[u8]) -> io::Result<()> {
@@ -1425,6 +1441,21 @@ impl<'a, M: Matcher, W: WriteColor> StandardImpl<'a, M, W> {
fn multi_line(&self) -> bool {
self.searcher.multi_line_with_matcher(&self.sink.matcher)
}
/// Trim prefix ASCII spaces from the given slice and return the
/// corresponding range.
///
/// This stops trimming a prefix as soon as it sees non-whitespace or a
/// line terminator.
fn trim_ascii_prefix_range(&self, slice: &[u8], range: Match) -> Match {
trim_ascii_prefix_range(self.searcher.line_terminator(), slice, range)
}
/// Trim prefix ASCII spaces from the given slice and return the
/// corresponding sub-slice.
fn trim_ascii_prefix<'s>(&self, slice: &'s [u8]) -> &'s [u8] {
trim_ascii_prefix(self.searcher.line_terminator(), slice)
}
}
#[cfg(test)]
@@ -1987,6 +2018,31 @@ Watson
assert_eq_printed!(expected, got);
}
#[test]
fn trim_ascii_with_line_term() {
let matcher = RegexMatcher::new("Watson").unwrap();
let mut printer = StandardBuilder::new()
.trim_ascii(true)
.build(NoColor::new(vec![]));
SearcherBuilder::new()
.line_number(true)
.before_context(1)
.build()
.search_reader(
&matcher,
"\n Watson".as_bytes(),
printer.sink(&matcher),
)
.unwrap();
let got = printer_contents(&mut printer);
let expected = "\
1-
2:Watson
";
assert_eq_printed!(expected, got);
}
#[test]
fn line_number() {
let matcher = RegexMatcher::new("Watson").unwrap();

View File

@@ -190,6 +190,8 @@ impl SummaryBuilder {
/// A [`UserColorSpec`](struct.UserColorSpec.html) can be constructed from
/// a string in accordance with the color specification format. See the
/// `UserColorSpec` type documentation for more details on the format.
/// A [`ColorSpecs`](struct.ColorSpecs.html) can then be generated from
/// zero or more `UserColorSpec`s.
///
/// Regardless of the color specifications provided here, whether color
/// is actually used or not is determined by the implementation of

View File

@@ -4,7 +4,7 @@ use std::io;
use std::path::Path;
use std::time;
use grep_matcher::{Captures, Match, Matcher};
use grep_matcher::{Captures, LineTerminator, Match, Matcher};
use grep_searcher::{
LineIter,
SinkError, SinkContext, SinkContextKind, SinkMatch,
@@ -157,6 +157,7 @@ pub struct Sunk<'a> {
}
impl<'a> Sunk<'a> {
#[inline]
pub fn empty() -> Sunk<'static> {
Sunk {
bytes: &[],
@@ -168,6 +169,7 @@ impl<'a> Sunk<'a> {
}
}
#[inline]
pub fn from_sink_match(
sunk: &'a SinkMatch<'a>,
original_matches: &'a [Match],
@@ -186,6 +188,7 @@ impl<'a> Sunk<'a> {
}
}
#[inline]
pub fn from_sink_context(
sunk: &'a SinkContext<'a>,
original_matches: &'a [Match],
@@ -204,30 +207,37 @@ impl<'a> Sunk<'a> {
}
}
#[inline]
pub fn context_kind(&self) -> Option<&'a SinkContextKind> {
self.context_kind
}
#[inline]
pub fn bytes(&self) -> &'a [u8] {
self.bytes
}
#[inline]
pub fn matches(&self) -> &'a [Match] {
self.matches
}
#[inline]
pub fn original_matches(&self) -> &'a [Match] {
self.original_matches
}
#[inline]
pub fn lines(&self, line_term: u8) -> LineIter<'a> {
LineIter::new(line_term, self.bytes())
}
#[inline]
pub fn absolute_byte_offset(&self) -> u64 {
self.absolute_byte_offset
}
#[inline]
pub fn line_number(&self) -> Option<u64> {
self.line_number
}
@@ -317,7 +327,7 @@ pub struct NiceDuration(pub time::Duration);
impl fmt::Display for NiceDuration {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:0.4}s", self.fractional_seconds())
write!(f, "{:0.6}s", self.fractional_seconds())
}
}
@@ -346,21 +356,37 @@ impl Serialize for NiceDuration {
/// Trim prefix ASCII spaces from the given slice and return the corresponding
/// range.
pub fn trim_ascii_prefix_range(slice: &[u8], range: Match) -> Match {
fn is_space(b: &&u8) -> bool {
match **b {
///
/// This stops trimming a prefix as soon as it sees non-whitespace or a line
/// terminator.
pub fn trim_ascii_prefix_range(
line_term: LineTerminator,
slice: &[u8],
range: Match,
) -> Match {
fn is_space(b: u8) -> bool {
match b {
b'\t' | b'\n' | b'\x0B' | b'\x0C' | b'\r' | b' ' => true,
_ => false,
}
}
let count = slice[range].iter().take_while(is_space).count();
let count = slice[range]
.iter()
.take_while(|&&b| -> bool {
is_space(b) && !line_term.as_bytes().contains(&b)
})
.count();
range.with_start(range.start() + count)
}
/// Trim prefix ASCII spaces from the given slice and return the corresponding
/// sub-slice.
pub fn trim_ascii_prefix(slice: &[u8]) -> &[u8] {
let range = trim_ascii_prefix_range(slice, Match::new(0, slice.len()));
pub fn trim_ascii_prefix(line_term: LineTerminator, slice: &[u8]) -> &[u8] {
let range = trim_ascii_prefix_range(
line_term,
slice,
Match::new(0, slice.len()),
);
&slice[range]
}

View File

@@ -1,6 +1,6 @@
[package]
name = "grep-regex"
version = "0.0.1" #:version
version = "0.1.0" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Use Rust's regex library with the 'grep' crate.
@@ -14,8 +14,8 @@ license = "Unlicense/MIT"
[dependencies]
log = "0.4"
grep-matcher = { version = "0.0.1", path = "../grep-matcher" }
grep-matcher = { version = "0.1.0", path = "../grep-matcher" }
regex = "1"
regex-syntax = "0.6"
thread_local = "0.3.5"
thread_local = "0.3.6"
utf8-ranges = "1"

View File

@@ -1,4 +1,35 @@
grep
----
This is a *library* that provides grep-style line-by-line regex searching (with
comparable performance to `grep` itself).
grep-regex
----------
The `grep-regex` crate provides an implementation of the `Matcher` trait from
the `grep-matcher` crate. This implementation permits Rust's regex engine to
be used in the `grep` crate for fast line oriented searching.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep-regex.svg)](https://crates.io/crates/grep-regex)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep-regex](https://docs.rs/grep-regex)
**NOTE:** You probably don't want to use this crate directly. Instead, you
should prefer the facade defined in the
[`grep`](https://docs.rs/grep)
crate.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep-regex = "0.1"
```
and this to your crate root:
```rust
extern crate grep_regex;
```
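As a rough sketch of how the matcher is used through the generic `Matcher` trait from `grep-matcher` (the pattern and haystack below are purely illustrative):
```rust
extern crate grep_matcher;
extern crate grep_regex;

use grep_matcher::Matcher;
use grep_regex::RegexMatcher;

fn main() {
    // Build a matcher from a pattern, then use the generic Matcher API.
    let matcher = RegexMatcher::new(r"\w+").unwrap();
    let m = matcher.find(b"hello world").unwrap().unwrap();
    // The first word match covers bytes 0..5 ("hello").
    assert_eq!((m.start(), m.end()), (0, 5));
}
```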

View File

@@ -201,21 +201,6 @@ impl ConfiguredHIR {
}
match LiteralSets::new(&self.expr).one_regex() {
None => Ok(None),
/*
if !self.config.crlf {
return Ok(None);
}
// If we're trying to support CRLF, then our "fast" line
// oriented regex needs `$` to be able to match at a `\r\n`
// boundary. The regex engine doesn't support this, so we
// "fake" it by replacing `$` with `(?:\r?$)`. Since the
// fast line regex is only used to detect lines, this never
// infects match offsets. Namely, the regex generated via
// `self.expr` is matched against lines with line terminators
// stripped.
let pattern = crlfify(self.expr.clone()).to_string();
self.pattern_to_regex(&pattern).map(Some)
*/
Some(pattern) => self.pattern_to_regex(&pattern).map(Some),
}
}

View File

@@ -263,7 +263,7 @@ impl RegexMatcherBuilder {
/// be slightly different than what one would expect given the pattern.
/// This is the trade off made: in many cases, `$` will "just work" in the
/// presence of `\r\n` line terminators, but matches may require some
/// trimming to faithfully represent the indended match.
/// trimming to faithfully represent the intended match.
///
/// Note that if you do not wish to set the line terminator but would still
/// like `$` to match `\r\n` line terminators, then it is valid to call

View File

@@ -1,6 +1,6 @@
[package]
name = "grep-searcher"
version = "0.0.1" #:version
version = "0.1.0" #:version
authors = ["Andrew Gallant <jamslam@gmail.com>"]
description = """
Fast line oriented regex searching as a library.
@@ -15,14 +15,14 @@ license = "Unlicense/MIT"
[dependencies]
bytecount = "0.3.1"
encoding_rs = "0.8"
encoding_rs_io = "0.1"
grep-matcher = { version = "0.0.1", path = "../grep-matcher" }
encoding_rs_io = "0.1.2"
grep-matcher = { version = "0.1.0", path = "../grep-matcher" }
log = "0.4"
memchr = "2"
memmap = "0.6"
[dev-dependencies]
grep-regex = { version = "0.0.1", path = "../grep-regex" }
grep-regex = { version = "0.1.0", path = "../grep-regex" }
regex = "1"
[features]

View File

@@ -1,4 +1,37 @@
grep
----
This is a *library* that provides grep-style line-by-line regex searching (with
comparable performance to `grep` itself).
grep-searcher
-------------
A high level library for executing fast line oriented searches. This handles
things like reporting contextual lines, counting lines, inverting a search,
detecting binary data, automatic UTF-16 transcoding and deciding whether or not
to use memory maps.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep-searcher.svg)](https://crates.io/crates/grep-searcher)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep-searcher](https://docs.rs/grep-searcher)
**NOTE:** You probably don't want to use this crate directly. Instead, you
should prefer the facade defined in the
[`grep`](https://docs.rs/grep)
crate.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep-searcher = "0.1"
```
and this to your crate root:
```rust
extern crate grep_searcher;
```
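Here is a minimal sketch of a slice search using the `sinks::UTF8` convenience sink; the pattern and haystack are illustrative, and it assumes the searcher's default of reporting line numbers:
```rust
extern crate grep_regex;
extern crate grep_searcher;

use std::error::Error;

use grep_regex::RegexMatcher;
use grep_searcher::Searcher;
use grep_searcher::sinks::UTF8;

fn main() -> Result<(), Box<Error>> {
    let matcher = RegexMatcher::new(r"Watson")?;
    let mut matches: Vec<(u64, String)> = vec![];
    Searcher::new().search_slice(
        &matcher,
        b"Sherlock Holmes\nDoctor Watson\n",
        UTF8(|line_number, line| {
            // Collect each matching line along with its line number.
            matches.push((line_number, line.trim().to_string()));
            Ok(true)
        }),
    )?;
    assert_eq!(matches, vec![(2, "Doctor Watson".to_string())]);
    Ok(())
}
```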

View File

@@ -294,8 +294,8 @@ pub struct LineBuffer {
/// has been exhausted.
last_lineterm: usize,
/// The end position of the buffer. This is always greater than or equal to
/// lastnl. The bytes between lastnl and end, if any, always correspond to
/// a partial line.
/// last_lineterm. The bytes between last_lineterm and end, if any, always
/// correspond to a partial line.
end: usize,
/// The absolute byte offset corresponding to `pos`. This is most typically
/// not a valid index into addressable memory, but rather, an offset that
@@ -475,8 +475,26 @@ impl LineBuffer {
// in bounds, which they should always be, and we enforce with
// an assert above.
//
// TODO: It seems like it should be possible to do this in safe
// code that results in the same codegen.
// It seems like it should be possible to do this in safe code that
// results in the same codegen. I tried the obvious:
//
// for (src, dst) in (self.pos..self.end).zip(0..) {
// self.buf[dst] = self.buf[src];
// }
//
// But the above does not work, and in fact compiles down to a slow
// byte-by-byte loop. I tried a few other minor variations, but
// alas, better minds might prevail.
//
// Overall, this doesn't save us *too* much. It mostly matters when
// the number of bytes we're copying is large, which can happen
// if the searcher is asked to produce a lot of context. We could
// decide this isn't worth it, but it does make an appreciable
// impact at or around the context=30 range on my machine.
//
// We could also use a temporary buffer that compiles down to two
// memcpys and is faster than the byte-at-a-time loop, but it
// complicates our options for limiting memory allocation a bit.
ptr::copy(
self.buf[self.pos..].as_ptr(),
self.buf.as_mut_ptr(),
@@ -485,7 +503,7 @@ impl LineBuffer {
}
self.pos = 0;
self.last_lineterm = roll_len;
self.end = self.last_lineterm;
self.end = roll_len;
}
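For reference, a sketch of the same roll written in safe code; this assumes `slice::copy_within`, which was stabilized after this change and is not used in the diff above:
```rust
// Hypothetical safe version of the "roll to front" step. On compilers that
// have slice::copy_within, this lowers to a memmove, much like ptr::copy.
fn roll_to_front(buf: &mut Vec<u8>, pos: usize, end: usize) -> usize {
    // Caller must guarantee pos <= end <= buf.len().
    let roll_len = end - pos;
    buf.copy_within(pos..end, 0);
    roll_len
}
```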
/// Ensures that the internal buffer has a non-zero amount of free space

View File

@@ -72,7 +72,18 @@ impl LineStep {
///
/// The range returned includes the line terminator. Ranges are always
/// non-empty.
pub fn next(&mut self, mut bytes: &[u8]) -> Option<(usize, usize)> {
pub fn next(&mut self, bytes: &[u8]) -> Option<(usize, usize)> {
self.next_impl(bytes)
}
/// Like next, but returns a `Match` instead of a tuple.
#[inline(always)]
pub(crate) fn next_match(&mut self, bytes: &[u8]) -> Option<Match> {
self.next_impl(bytes).map(|(s, e)| Match::new(s, e))
}
#[inline(always)]
fn next_impl(&mut self, mut bytes: &[u8]) -> Option<(usize, usize)> {
bytes = &bytes[..self.end];
match memchr(self.line_term, &bytes[self.pos..]) {
None => {
@@ -95,11 +106,6 @@ impl LineStep {
}
}
}
/// Like next, but returns a `Match` instead of a tuple.
pub(crate) fn next_match(&mut self, bytes: &[u8]) -> Option<Match> {
self.next(bytes).map(|(s, e)| Match::new(s, e))
}
}
/// Count the number of occurrences of `line_term` in `bytes`.
@@ -109,9 +115,11 @@ pub fn count(bytes: &[u8], line_term: u8) -> u64 {
/// Given a line that possibly ends with a terminator, return that line without
/// the terminator.
#[inline(always)]
pub fn without_terminator(bytes: &[u8], line_term: LineTerminator) -> &[u8] {
let line_term = line_term.as_bytes();
if bytes.get(bytes.len().saturating_sub(line_term.len())..) == Some(line_term) {
let start = bytes.len().saturating_sub(line_term.len());
if bytes.get(start..) == Some(line_term) {
return &bytes[..bytes.len() - line_term.len()];
}
bytes
@@ -121,6 +129,7 @@ pub fn without_terminator(bytes: &[u8], line_term: LineTerminator) -> &[u8] {
/// of bytes.
///
/// Line terminators are considered part of the line they terminate.
#[inline(always)]
pub fn locate(
bytes: &[u8],
line_term: u8,
@@ -167,7 +176,7 @@ fn preceding_by_pos(
) -> usize {
if pos == 0 {
return 0;
} else if bytes[pos - 1] == b'\n' {
} else if bytes[pos - 1] == line_term {
pos -= 1;
}
loop {

View File

@@ -290,11 +290,13 @@ impl<'s, M: Matcher, S: Sink> Core<'s, M, S> {
return Ok(false);
}
} else if let Some(line) = self.find_by_line_fast(buf)? {
if !self.after_context_by_line(buf, line.start())? {
return Ok(false);
}
if !self.before_context_by_line(buf, line.start())? {
return Ok(false);
if self.config.max_context() > 0 {
if !self.after_context_by_line(buf, line.start())? {
return Ok(false);
}
if !self.before_context_by_line(buf, line.start())? {
return Ok(false);
}
}
self.set_pos(line.end());
if !self.sink_matched(buf, &line)? {
@@ -311,6 +313,7 @@ impl<'s, M: Matcher, S: Sink> Core<'s, M, S> {
Ok(true)
}
#[inline(always)]
fn match_by_line_fast_invert(
&mut self,
buf: &[u8],
@@ -351,6 +354,7 @@ impl<'s, M: Matcher, S: Sink> Core<'s, M, S> {
Ok(true)
}
#[inline(always)]
fn find_by_line_fast(
&self,
buf: &[u8],
@@ -406,6 +410,7 @@ impl<'s, M: Matcher, S: Sink> Core<'s, M, S> {
Ok(None)
}
#[inline(always)]
fn sink_matched(
&mut self,
buf: &[u8],
@@ -419,11 +424,21 @@ impl<'s, M: Matcher, S: Sink> Core<'s, M, S> {
}
self.count_lines(buf, range.start());
let offset = self.absolute_byte_offset + range.start() as u64;
let linebuf =
if self.config.line_term.is_crlf() {
// Normally, a line terminator is never part of a match, but
// if the line terminator is CRLF, then it's possible for `\r`
// to end up in the match, which we generally don't want. So
// we strip it here.
lines::without_terminator(&buf[*range], self.config.line_term)
} else {
&buf[*range]
};
let keepgoing = self.sink.matched(
&self.searcher,
&SinkMatch {
line_term: self.config.line_term,
bytes: &buf[*range],
bytes: linebuf,
absolute_byte_offset: offset,
line_number: self.line_number,
},

View File

@@ -76,9 +76,9 @@ impl MmapChoice {
return None;
}
// SAFETY: This is acceptable because the only way `MmapChoiceImpl` can
// be `Auto` is if the caller invoked the `auto` constructor. Thus,
// this is a propagation of the caller's assertion that using memory
// maps is safe.
// be `Auto` is if the caller invoked the `auto` constructor, which
// is itself not safe. Thus, this is a propagation of the caller's
// assertion that using memory maps is safe.
match unsafe { Mmap::map(file) } {
Ok(mmap) => Some(mmap),
Err(err) => {

View File

@@ -296,7 +296,7 @@ impl SearcherBuilder {
}
}
/// Builder a searcher with the given matcher.
/// Build a searcher with the given matcher.
pub fn build(&self) -> Searcher {
let mut config = self.config.clone();
if config.passthru {
@@ -306,7 +306,8 @@ impl SearcherBuilder {
let mut decode_builder = DecodeReaderBytesBuilder::new();
decode_builder
.encoding(self.config.encoding.as_ref().map(|e| e.0))
.utf8_passthru(true);
.utf8_passthru(true)
.bom_override(true);
Searcher {
config: config,
decode_builder: decode_builder,
@@ -318,7 +319,7 @@ impl SearcherBuilder {
/// Set the line terminator that is used by the searcher.
///
/// When building a searcher, if the matcher provided has a line terminator
/// When using a searcher, if the matcher provided has a line terminator
/// set, then it must be the same as this one. If they aren't, building
/// a searcher will return an error.
///
@@ -453,12 +454,25 @@ impl SearcherBuilder {
/// enabled, then the entire contents will be read on to the heap before
/// searching begins.
///
/// The default behavior is **never**. Generally speaking, command line
/// programs probably want to enable memory maps. The only reason to keep
/// memory maps disabled is if there are concerns using them. For example,
/// if your process is searching a file backed memory map at the same time
/// that file is truncated, then it's possible for the process to terminate
/// with a bus error.
/// The default behavior is **never**. Generally speaking, and perhaps
/// against conventional wisdom, memory maps don't necessarily enable
/// faster searching. For example, depending on the platform, using memory
/// maps while searching a large directory can actually be quite a bit
/// slower than using normal read calls because of the overhead of managing
/// the memory maps.
///
/// Memory maps can be faster in some cases however. On some platforms,
/// when searching a very large file that *is already in memory*, it can
/// be slightly faster to search it as a memory map instead of using
/// normal read calls.
///
/// Finally, memory maps have a somewhat complicated safety story in Rust.
/// If you aren't sure whether enabling memory maps is worth it, then just
/// don't bother with it.
///
/// **WARNING**: If your process is searching a file backed memory map
/// at the same time that file is truncated, then it's possible for the
/// process to terminate with a bus error.
pub fn memory_map(
&mut self,
strategy: MmapChoice,
@@ -486,7 +500,8 @@ impl SearcherBuilder {
/// Set the encoding used to read the source data before searching.
///
/// When an encoding is provided, then the source data is _unconditionally_
/// transcoded using the encoding. This will disable BOM sniffing. If the
/// transcoded using the encoding, unless a BOM is present. If a BOM is
/// present, then the encoding indicated by the BOM is used instead. If the
/// transcoding process encounters an error, then bytes are replaced with
/// the Unicode replacement codepoint.
///
@@ -732,6 +747,7 @@ impl Searcher {
/// where the output may be tailored based on how the searcher is configured.
impl Searcher {
/// Returns the line terminator used by this searcher.
#[inline]
pub fn line_terminator(&self) -> LineTerminator {
self.config.line_term
}
@@ -739,18 +755,21 @@ impl Searcher {
/// Returns true if and only if this searcher is configured to invert its
/// search results. That is, matching lines are lines that do **not** match
/// the searcher's matcher.
#[inline]
pub fn invert_match(&self) -> bool {
self.config.invert_match
}
/// Returns true if and only if this searcher is configured to count line
/// numbers.
#[inline]
pub fn line_number(&self) -> bool {
self.config.line_number
}
/// Returns true if and only if this searcher is configured to perform
/// multi line search.
#[inline]
pub fn multi_line(&self) -> bool {
self.config.multi_line
}
@@ -785,17 +804,20 @@ impl Searcher {
/// Returns the number of "after" context lines to report. When context
/// reporting is not enabled, this returns `0`.
#[inline]
pub fn after_context(&self) -> usize {
self.config.after_context
}
/// Returns the number of "before" context lines to report. When context
/// reporting is not enabled, this returns `0`.
#[inline]
pub fn before_context(&self) -> usize {
self.config.before_context
}
/// Returns true if and only if the searcher has "passthru" mode enabled.
#[inline]
pub fn passthru(&self) -> bool {
self.config.passthru
}

View File

@@ -69,7 +69,7 @@ impl SinkError for Box<::std::error::Error> {
/// an implementation of this trait to a searcher, and the searcher is then
/// responsible for calling the methods on this trait.
///
/// This trait defines five behaviors:
/// This trait defines several behaviors:
///
/// * What to do when a match is found. Callers must provide this.
/// * What to do when an error occurs. Callers must provide this via the

View File

@@ -13,11 +13,18 @@ keywords = ["regex", "grep", "egrep", "search", "pattern"]
license = "Unlicense/MIT"
[dependencies]
grep-matcher = { version = "0.0.1", path = "../grep-matcher" }
grep-printer = { version = "0.0.1", path = "../grep-printer" }
grep-regex = { version = "0.0.1", path = "../grep-regex" }
grep-searcher = { version = "0.0.1", path = "../grep-searcher" }
grep-matcher = { version = "0.1.0", path = "../grep-matcher" }
grep-pcre2 = { version = "0.1.0", path = "../grep-pcre2", optional = true }
grep-printer = { version = "0.1.0", path = "../grep-printer" }
grep-regex = { version = "0.1.0", path = "../grep-regex" }
grep-searcher = { version = "0.1.0", path = "../grep-searcher" }
[dev-dependencies]
atty = "0.2.11"
termcolor = "1"
walkdir = "2.2.0"
[features]
avx-accel = ["grep-searcher/avx-accel"]
simd-accel = ["grep-searcher/simd-accel"]
pcre2 = ["grep-pcre2"]

View File

@@ -1,4 +1,41 @@
grep
----
This is a *library* that provides grep-style line-by-line regex searching (with
comparable performance to `grep` itself).
ripgrep, as a library.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/grep.svg)](https://crates.io/crates/grep)
Dual-licensed under MIT or the [UNLICENSE](http://unlicense.org).
### Documentation
[https://docs.rs/grep](https://docs.rs/grep)
NOTE: This crate isn't ready for wide use yet. Ambitious individuals can
probably piece together the parts, but there is no high level documentation
describing how all of the pieces fit together.
### Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
grep = "0.2"
```
and this to your crate root:
```rust
extern crate grep;
```
### Features
This crate provides a `pcre2` feature (disabled by default) which, when
enabled, re-exports the `grep-pcre2` crate as an alternative `Matcher`
implementation to the standard `grep-regex` implementation.
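As a sketch of why this matters, code written against the generic `Matcher` trait works with either engine; the helper below is illustrative and not part of the crate:
```rust
extern crate grep;

use grep::matcher::Matcher;
use grep::regex::RegexMatcher;

/// Count matches with any `Matcher` implementation: `grep::regex::RegexMatcher`
/// by default, or `grep::pcre2::RegexMatcher` when the `pcre2` feature is on.
fn count_matches<M: Matcher>(matcher: &M, haystack: &[u8]) -> Result<u64, String> {
    let mut count = 0;
    matcher
        .find_iter(haystack, |_| {
            count += 1;
            true
        })
        .map_err(|err| err.to_string())?;
    Ok(count)
}

fn main() {
    let matcher = RegexMatcher::new(r"\w+").unwrap();
    assert_eq!(count_matches(&matcher, b"hello world").unwrap(), 2);
}
```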

grep/examples/simplegrep.rs (new file, 107 lines)
View File

@@ -0,0 +1,107 @@
extern crate atty;
extern crate grep;
extern crate termcolor;
extern crate walkdir;
use std::env;
use std::error;
use std::ffi::OsString;
use std::path::Path;
use std::process;
use std::result;
use grep::printer::{ColorSpecs, StandardBuilder};
use grep::regex::RegexMatcher;
use grep::searcher::{BinaryDetection, SearcherBuilder};
use termcolor::{ColorChoice, StandardStream};
use walkdir::WalkDir;
macro_rules! fail {
($($tt:tt)*) => {
return Err(From::from(format!($($tt)*)));
}
}
type Result<T> = result::Result<T, Box<error::Error>>;
fn main() {
if let Err(err) = try_main() {
eprintln!("{}", err);
process::exit(1);
}
}
fn try_main() -> Result<()> {
let mut args: Vec<OsString> = env::args_os().collect();
if args.len() < 2 {
fail!("Usage: simplegrep <pattern> [<path> ...]");
}
if args.len() == 2 {
args.push(OsString::from("./"));
}
let pattern = match args[1].clone().into_string() {
Ok(pattern) => pattern,
Err(_) => {
fail!(
"pattern is not valid UTF-8: '{:?}'",
args[1].to_string_lossy()
);
}
};
search(&pattern, &args[2..])
}
fn search(pattern: &str, paths: &[OsString]) -> Result<()> {
let matcher = RegexMatcher::new_line_matcher(&pattern)?;
let mut searcher = SearcherBuilder::new()
.binary_detection(BinaryDetection::quit(b'\x00'))
.build();
let mut printer = StandardBuilder::new()
.color_specs(colors())
.build(StandardStream::stdout(color_choice()));
for path in paths {
for result in WalkDir::new(path) {
let dent = match result {
Ok(dent) => dent,
Err(err) => {
eprintln!(
"{}: {}",
err.path().unwrap_or(Path::new("error")).display(),
err,
);
continue;
}
};
if !dent.file_type().is_file() {
continue;
}
let result = searcher.search_path(
&matcher,
dent.path(),
printer.sink_with_path(&matcher, dent.path()),
);
if let Err(err) = result {
eprintln!("{}: {}", dent.path().display(), err);
}
}
}
Ok(())
}
fn color_choice() -> ColorChoice {
if atty::is(atty::Stream::Stdout) {
ColorChoice::Auto
} else {
ColorChoice::Never
}
}
fn colors() -> ColorSpecs {
ColorSpecs::new(&[
"path:fg:magenta".parse().unwrap(),
"line:fg:green".parse().unwrap(),
"match:fg:red".parse().unwrap(),
"match:style:bold".parse().unwrap(),
])
}

View File

@@ -1,10 +1,22 @@
/*!
TODO.
ripgrep, as a library.
This library is intended to provide a high level facade to the crates that
make up ripgrep's core searching routines. However, there is no high level
documentation available yet guiding users on how to fit all of the pieces
together.
Every public API item in the constituent crates is documented, but examples
are sparse.
A cookbook and a guide are planned.
*/
#![deny(missing_docs)]
pub extern crate grep_matcher as matcher;
#[cfg(feature = "pcre2")]
pub extern crate grep_pcre2 as pcre2;
pub extern crate grep_printer as printer;
pub extern crate grep_regex as regex;
pub extern crate grep_searcher as searcher;

View File

@@ -26,7 +26,7 @@ memchr = "2"
regex = "1"
same-file = "1"
thread_local = "0.3.2"
walkdir = "2"
walkdir = "2.2.0"
[target.'cfg(windows)'.dependencies.winapi]
version = "0.3"

View File

@@ -4,7 +4,7 @@ The ignore crate provides a fast recursive directory iterator that respects
various filters such as globs, file types and `.gitignore` files. This crate
also provides lower level direct access to gitignore and file type matchers.
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.png)](https://travis-ci.org/BurntSushi/ripgrep)
[![Linux build status](https://api.travis-ci.org/BurntSushi/ripgrep.svg)](https://travis-ci.org/BurntSushi/ripgrep)
[![Windows build status](https://ci.appveyor.com/api/projects/status/github/BurntSushi/ripgrep?svg=true)](https://ci.appveyor.com/project/BurntSushi/ripgrep)
[![](https://img.shields.io/crates/v/ignore.svg)](https://crates.io/crates/ignore)

View File

@@ -1,5 +1,3 @@
#![allow(dead_code, unused_imports, unused_mut, unused_variables)]
extern crate crossbeam;
extern crate ignore;
extern crate walkdir;
@@ -8,7 +6,6 @@ use std::env;
use std::io::{self, Write};
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;
use crossbeam::sync::MsQueue;
@@ -48,13 +45,11 @@ fn main() {
})
});
} else if simple {
let mut stdout = io::BufWriter::new(io::stdout());
let walker = WalkDir::new(path);
for result in walker {
queue.push(Some(DirEntry::X(result.unwrap())));
}
} else {
let mut stdout = io::BufWriter::new(io::stdout());
let walker = WalkBuilder::new(path).build();
for result in walker {
queue.push(Some(DirEntry::Y(result.unwrap())));

View File

@@ -98,6 +98,7 @@ use {Error, Match};
const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("agda", &["*.agda", "*.lagda"]),
("ats", &["*.ats", "*.dats", "*.sats", "*.hats"]),
("aidl", &["*.aidl"]),
("amake", &["*.mk", "*.bp"]),
("asciidoc", &["*.adoc", "*.asc", "*.asciidoc"]),
@@ -107,7 +108,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("bazel", &["*.bzl", "WORKSPACE", "BUILD"]),
("bitbake", &["*.bb", "*.bbappend", "*.bbclass", "*.conf", "*.inc"]),
("bzip2", &["*.bz2"]),
("c", &["*.c", "*.h", "*.H"]),
("c", &["*.c", "*.h", "*.H", "*.cats"]),
("cabal", &["*.cabal"]),
("cbor", &["*.cbor"]),
("ceylon", &["*.ceylon"]),
@@ -129,6 +130,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("cython", &["*.pyx"]),
("dart", &["*.dart"]),
("d", &["*.d"]),
("dhall", &["*.dhall"]),
("docker", &["*Dockerfile*"]),
("elisp", &["*.el"]),
("elixir", &["*.ex", "*.eex", "*.exs"]),
@@ -147,9 +149,10 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("groovy", &["*.groovy", "*.gradle"]),
("h", &["*.h", "*.hpp"]),
("hbs", &["*.hbs"]),
("haskell", &["*.hs", "*.lhs"]),
("haskell", &["*.hs", "*.lhs", "*.cpphs", "*.c2hs", "*.hsc"]),
("hs", &["*.hs", "*.lhs"]),
("html", &["*.htm", "*.html", "*.ejs"]),
("idris", &["*.idr", "*.lidr"]),
("java", &["*.java", "*.jsp"]),
("jinja", &["*.j2", "*.jinja", "*.jinja2"]),
("js", &[
@@ -200,6 +203,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
"makefile", "Makefile",
"*.mk", "*.mak"
]),
("mako", &["*.mako", "*.mao"]),
("markdown", &["*.markdown", "*.md", "*.mdown", "*.mkdn"]),
("md", &["*.markdown", "*.md", "*.mdown", "*.mkdn"]),
("man", &["*.[0-9lnpx]", "*.[0-9][cEFMmpSx]"]),
@@ -232,7 +236,7 @@ const DEFAULT_TYPES: &'static [(&'static str, &'static [&'static str])] = &[
("ruby", &["Gemfile", "*.gemspec", ".irbrc", "Rakefile", "*.rb"]),
("rust", &["*.rs"]),
("sass", &["*.sass", "*.scss"]),
("scala", &["*.scala"]),
("scala", &["*.scala", "*.sbt"]),
("sh", &[
// Portable/misc. init files
".login", ".logout", ".profile", "profile",

View File

@@ -82,7 +82,34 @@ pub fn app() -> App<'static, 'static> {
/// the RIPGREP_BUILD_GIT_HASH env var is inspected for it. If that isn't set,
/// then a revision hash is not included in the version string returned.
pub fn long_version(revision_hash: Option<&str>) -> String {
// Let's say whether faster CPU instructions are enabled or not.
// Do we have a git hash?
// (Yes, if ripgrep was built on a machine with `git` installed.)
let hash = match revision_hash.or(option_env!("RIPGREP_BUILD_GIT_HASH")) {
None => String::new(),
Some(githash) => format!(" (rev {})", githash),
};
// Put everything together.
let runtime = runtime_cpu_features();
if runtime.is_empty() {
format!(
"{}{}\n{} (compiled)",
crate_version!(),
hash,
compile_cpu_features().join(" ")
)
} else {
format!(
"{}{}\n{} (compiled)\n{} (runtime)",
crate_version!(),
hash,
compile_cpu_features().join(" "),
runtime.join(" ")
)
}
}
/// Returns the relevant CPU features enabled at compile time.
fn compile_cpu_features() -> Vec<&'static str> {
let mut features = vec![];
if cfg!(feature = "simd-accel") {
features.push("+SIMD");
@@ -94,14 +121,33 @@ pub fn long_version(revision_hash: Option<&str>) -> String {
} else {
features.push("-AVX");
}
// Do we have a git hash?
// (Yes, if ripgrep was built on a machine with `git` installed.)
let hash = match revision_hash.or(option_env!("RIPGREP_BUILD_GIT_HASH")) {
None => String::new(),
Some(githash) => format!(" (rev {})", githash),
};
// Put everything together.
format!("{}{}\n{}", crate_version!(), hash, features.join(" "))
features
}
/// Returns the relevant CPU features enabled at runtime.
#[cfg(all(ripgrep_runtime_cpu, target_arch = "x86_64"))]
fn runtime_cpu_features() -> Vec<&'static str> {
// This is kind of a dirty violation of abstraction, since it assumes
// knowledge about what specific SIMD features are being used.
let mut features = vec![];
if is_x86_feature_detected!("ssse3") {
features.push("+SIMD");
} else {
features.push("-SIMD");
}
if is_x86_feature_detected!("avx2") {
features.push("+AVX");
} else {
features.push("-AVX");
}
features
}
/// Returns the relevant CPU features enabled at runtime.
#[cfg(not(all(ripgrep_runtime_cpu, target_arch = "x86_64")))]
fn runtime_cpu_features() -> Vec<&'static str> {
vec![]
}
/// Arg is a light alias for a clap::Arg that is specialized to compile time
@@ -502,6 +548,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_context_separator(&mut args);
flag_count(&mut args);
flag_count_matches(&mut args);
flag_crlf(&mut args);
flag_debug(&mut args);
flag_dfa_size_limit(&mut args);
flag_encoding(&mut args);
@@ -518,6 +565,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_ignore_case(&mut args);
flag_ignore_file(&mut args);
flag_invert_match(&mut args);
flag_json(&mut args);
flag_line_number(&mut args);
flag_line_regexp(&mut args);
flag_max_columns(&mut args);
@@ -526,6 +574,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_max_filesize(&mut args);
flag_mmap(&mut args);
flag_multiline(&mut args);
flag_multiline_dotall(&mut args);
flag_no_config(&mut args);
flag_no_ignore(&mut args);
flag_no_ignore_global(&mut args);
@@ -534,9 +583,12 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_no_ignore_vcs(&mut args);
flag_no_messages(&mut args);
flag_null(&mut args);
flag_null_data(&mut args);
flag_only_matching(&mut args);
flag_path_separator(&mut args);
flag_passthru(&mut args);
flag_pcre2(&mut args);
flag_pcre2_unicode(&mut args);
flag_pre(&mut args);
flag_pretty(&mut args);
flag_quiet(&mut args);
@@ -549,6 +601,7 @@ pub fn all_args_and_flags() -> Vec<RGArg> {
flag_stats(&mut args);
flag_text(&mut args);
flag_threads(&mut args);
flag_trim(&mut args);
flag_type(&mut args);
flag_type_add(&mut args);
flag_type_clear(&mut args);
@@ -810,6 +863,32 @@ This overrides the --count flag. Note that when --count is combined with
args.push(arg);
}
fn flag_crlf(args: &mut Vec<RGArg>) {
const SHORT: &str = "Support CRLF line terminators (useful on Windows).";
const LONG: &str = long!("\
When enabled, ripgrep will treat CRLF ('\\r\\n') as a line terminator instead
of just '\\n'.
Principally, this permits '$' in regex patterns to match just before CRLF
instead of just before LF. The underlying regex engine may not support this
natively, so ripgrep will translate all instances of '$' to '(?:\\r??$)'. This
may produce slightly different than desired match offsets. It is intended as a
work-around until the regex engine supports this natively.
CRLF support can be disabled with --no-crlf.
");
let arg = RGArg::switch("crlf")
.help(SHORT).long_help(LONG)
.overrides("no-crlf")
.overrides("null-data");
args.push(arg);
let arg = RGArg::switch("no-crlf")
.hidden()
.overrides("crlf");
args.push(arg);
}
fn flag_debug(args: &mut Vec<RGArg>) {
const SHORT: &str = "Show debug messages.";
const LONG: &str = long!("\
@@ -856,10 +935,17 @@ default value is 'auto', which will cause ripgrep to do a best effort automatic
detection of encoding on a per-file basis. Other supported values can be found
in the list of labels here:
https://encoding.spec.whatwg.org/#concept-encoding-get
This flag can be disabled with --no-encoding.
");
let arg = RGArg::flag("encoding", "ENCODING").short("E")
.help(SHORT).long_help(LONG);
args.push(arg);
let arg = RGArg::switch("no-encoding")
.hidden()
.overrides("encoding");
args.push(arg);
}
fn flag_file(args: &mut Vec<RGArg>) {
@@ -1085,6 +1171,66 @@ Invert matching. Show lines that do not match the given patterns.
args.push(arg);
}
fn flag_json(args: &mut Vec<RGArg>) {
const SHORT: &str = "Show search results in a JSON Lines format.";
const LONG: &str = long!("\
Enable printing results in a JSON Lines format.
When this flag is provided, ripgrep will emit a sequence of messages, each
encoded as a JSON object, where there are five different message types:
**begin** - A message that indicates a file is being searched and contains at
least one match.
**end** - A message that indicates a file is done being searched. This message
also includes summary statistics about the search for a particular file.
**match** - A message that indicates a match was found. This includes the text
and offsets of the match.
**context** - A message that indicates a contextual line was found. This
includes the text of the line, along with any match information if the search
was inverted.
**summary** - The final message emitted by ripgrep that contains summary
statistics about the search across all files.
Since file paths or the contents of files are not guaranteed to be valid UTF-8
and JSON itself must be representable by a Unicode encoding, ripgrep will emit
all data elements as objects with one of two keys: 'text' or 'bytes'. 'text' is
a normal JSON string when the data is valid UTF-8 while 'bytes' is the base64
encoded contents of the data.
The JSON Lines format is only supported for showing search results. It cannot
be used with other flags that emit other types of output, such as --files,
--files-with-matches, --files-without-match, --count or --count-matches.
ripgrep will report an error if any of the aforementioned flags are used in
concert with --json.
Other flags that control aspects of the standard output such as
--only-matching, --heading, --replace, --max-columns, etc., have no effect
when --json is set.
A more complete description of the JSON format used can be found here:
https://docs.rs/grep-printer/*/grep_printer/struct.JSON.html
The JSON Lines format can be disabled with --no-json.
");
let arg = RGArg::switch("json")
.help(SHORT).long_help(LONG)
.overrides("no-json")
.conflicts(&[
"count", "count-matches",
"files", "files-with-matches", "files-without-match",
]);
args.push(arg);
let arg = RGArg::switch("no-json")
.hidden()
.overrides("json");
args.push(arg);
}
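The format described above is produced by the `grep-printer` JSON printer; driving it directly looks roughly like this (a sketch assuming the `JSON::new` constructor and default searcher settings):
```rust
extern crate grep_printer;
extern crate grep_regex;
extern crate grep_searcher;

use std::error::Error;

use grep_printer::JSON;
use grep_regex::RegexMatcher;
use grep_searcher::Searcher;

fn main() -> Result<(), Box<Error>> {
    let matcher = RegexMatcher::new(r"Watson")?;
    // Emit JSON Lines messages (begin/match/context/end) into a buffer.
    let mut printer = JSON::new(vec![]);
    Searcher::new().search_slice(
        &matcher,
        b"Doctor Watson\n",
        printer.sink(&matcher),
    )?;
    let out = String::from_utf8(printer.into_inner())?;
    // Each line of `out` is one JSON object, e.g. a "match" message.
    assert!(out.lines().any(|line| line.contains("\"type\":\"match\"")));
    Ok(())
}
```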
fn flag_line_number(args: &mut Vec<RGArg>) {
const SHORT: &str = "Show line numbers.";
const LONG: &str = long!("\
@@ -1217,9 +1363,41 @@ fn flag_multiline(args: &mut Vec<RGArg>) {
const LONG: &str = long!("\
Enable matching across multiple lines.
When multiline mode is enabled, ripgrep will lift the restriction that a match
cannot include a line terminator. For example, when multiline mode is not
enabled (the default), then the regex '\\p{any}' will match any Unicode
codepoint other than '\\n'. Similarly, the regex '\\n' is explicitly forbidden,
and if you try to use it, ripgrep will return an error. However, when multiline
mode is enabled, '\\p{any}' will match any Unicode codepoint, including '\\n',
and regexes like '\\n' are permitted.
An important caveat is that multiline mode does not change the match semantics
of '.'. Namely, in most regex matchers, a '.' will by default match any
character other than '\\n', and this is true in ripgrep as well. In order to
make '.' match '\\n', you must enable the \"dot all\" flag inside the regex.
For example, both '(?s).' and '(?s:.)' have the same semantics, where '.' will
match any character, including '\\n'. Alternatively, the '--multiline-dotall'
flag may be passed to make the \"dot all\" behavior the default. This flag only
applies when multiline search is enabled.
There is no limit on the number of the lines that a single match can span.
**WARNING**: Because of how the underlying regex engine works, multiline
searches may be slower than normal line-oriented searches, and they may also
use more memory. In particular, when multiline mode is enabled, ripgrep
requires that each file it searches is laid out contiguously in memory
(either by reading it onto the heap or by memory-mapping it). Things that
cannot be memory-mapped (such as stdin) will be consumed until EOF before
searching can begin. In general, ripgrep will only do these things when
necessary. Specifically, if the --multiline flag is provided but the regex
does not contain patterns that would match '\\n' characters, then ripgrep
will automatically avoid reading each file into memory before searching it.
Nevertheless, if you only care about matches spanning at most one line, then it
is always better to disable multiline mode.
This flag can be disabled with --no-multiline.
");
let arg = RGArg::switch("multiline")
let arg = RGArg::switch("multiline").short("U")
.help(SHORT).long_help(LONG)
.overrides("no-multiline");
args.push(arg);
@@ -1230,6 +1408,37 @@ This flag can be disabled with --no-multiline.
args.push(arg);
}
fn flag_multiline_dotall(args: &mut Vec<RGArg>) {
const SHORT: &str = "Make '.' match new lines when multiline is enabled.";
const LONG: &str = long!("\
This flag enables \"dot all\" in your regex pattern, which causes '.' to match
newlines when multiline searching is enabled. This flag has no effect if
multiline searching isn't enabled with the --multiline flag.
Normally, a '.' will match any character except newlines. While this behavior
typically isn't relevant for line-oriented matching (since matches can span at
most one line), this can be useful when searching with the -U/--multiline flag.
By default, the multiline mode runs without this flag.
This flag is generally intended to be used in an alias or your ripgrep config
file if you prefer \"dot all\" semantics by default. Note that regardless of
whether this flag is used, \"dot all\" semantics can still be controlled via
inline flags in the regex pattern itself, e.g., '(?s:.)' always enables \"dot
all\" whereas '(?-s:.)' always disables \"dot all\".
This flag can be disabled with --no-multiline-dotall.
");
let arg = RGArg::switch("multiline-dotall")
.help(SHORT).long_help(LONG)
.overrides("no-multiline-dotall");
args.push(arg);
let arg = RGArg::switch("no-multiline-dotall")
.hidden()
.overrides("multiline-dotall");
args.push(arg);
}
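At the library level these two flags correspond (roughly) to builder options on `grep-regex` and `grep-searcher`; a sketch of that wiring, where the option names are taken from those builders and the pattern is illustrative:
```rust
extern crate grep_regex;
extern crate grep_searcher;

use grep_regex::RegexMatcherBuilder;
use grep_searcher::SearcherBuilder;

fn main() {
    // Roughly what -U/--multiline plus --multiline-dotall enable.
    let matcher = RegexMatcherBuilder::new()
        .multi_line(true)              // allow matches to span lines
        .dot_matches_new_line(true)    // "dot all": '.' also matches '\n'
        .build(r"Holmes.+Watson")
        .unwrap();
    let mut searcher = SearcherBuilder::new()
        .multi_line(true)
        .build();
    let mut found = false;
    searcher
        .search_slice(
            &matcher,
            b"Sherlock Holmes\nand Doctor Watson\n",
            grep_searcher::sinks::UTF8(|_, _| {
                found = true;
                Ok(true)
            }),
        )
        .unwrap();
    assert!(found);
}
```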
fn flag_no_config(args: &mut Vec<RGArg>) {
const SHORT: &str = "Never read configuration files.";
const LONG: &str = long!("\
@@ -1268,7 +1477,7 @@ fn flag_no_ignore_global(args: &mut Vec<RGArg>) {
const LONG: &str = long!("\
Don't respect ignore files that come from \"global\" sources such as git's
`core.excludesFile` configuration option (which defaults to
`$HOME/.config/git/ignore).
`$HOME/.config/git/ignore`).
This flag can be disabled with the --ignore-global flag.
");
@@ -1372,6 +1581,29 @@ for use with xargs.
args.push(arg);
}
fn flag_null_data(args: &mut Vec<RGArg>) {
const SHORT: &str = "Use NUL as a line terminator instead of \\n.";
const LONG: &str = long!("\
Enabling this option causes ripgrep to use NUL as a line terminator instead of
the default of '\\n'.
This is useful when searching large binary files that would otherwise have very
long lines if '\\n' were used as the line terminator. In particular, ripgrep
requires that, at a minimum, each line must fit into memory. Using NUL instead
can be a useful stopgap to keep memory requirements low and avoid OOM (out of
memory) conditions.
This is also useful for processing NUL delimited data, such as that emitted
when using ripgrep's -0/--null flag or find's --print0 flag.
Using this flag implies -a/--text.
");
let arg = RGArg::switch("null-data")
.help(SHORT).long_help(LONG)
.overrides("crlf");
args.push(arg);
}
fn flag_only_matching(args: &mut Vec<RGArg>) {
const SHORT: &str = "Print only matches parts of a line.";
const LONG: &str = long!("\
@@ -1413,6 +1645,72 @@ without needing to modify the pattern.
args.push(arg);
}
fn flag_pcre2(args: &mut Vec<RGArg>) {
const SHORT: &str = "Enable PCRE2 matching.";
const LONG: &str = long!("\
When this flag is present, ripgrep will use the PCRE2 regex engine instead of
its default regex engine.
This is generally useful when you want to use features such as look-around
or backreferences.
Note that PCRE2 is an optional ripgrep feature. If PCRE2 wasn't included in
your build of ripgrep, then using this flag will result in ripgrep printing
an error message and exiting.
This flag can be disabled with --no-pcre2.
");
let arg = RGArg::switch("pcre2").short("P")
.help(SHORT).long_help(LONG)
.overrides("no-pcre2");
args.push(arg);
let arg = RGArg::switch("no-pcre2")
.hidden()
.overrides("pcre2");
args.push(arg);
}
fn flag_pcre2_unicode(args: &mut Vec<RGArg>) {
const SHORT: &str = "Enable Unicode mode for PCRE2 matching.";
const LONG: &str = long!("\
When PCRE2 matching is enabled, this flag will enable Unicode mode. If PCRE2
matching is not enabled, then this flag has no effect.
This flag is enabled by default when PCRE2 matching is enabled.
When PCRE2's Unicode mode is enabled several different types of patterns become
Unicode aware. This includes '\\b', '\\B', '\\w', '\\W', '\\d', '\\D', '\\s'
and '\\S'. Similarly, the '.' meta character will match any Unicode codepoint
instead of any byte. Caseless matching will also use Unicode simple case
folding instead of ASCII-only case insensitivity.
Unicode mode in PCRE2 represents a critical trade off in the user experience
of ripgrep. In particular, unlike the default regex engine, PCRE2 does not
support the ability to search possibly invalid UTF-8 with Unicode features
enabled. Instead, PCRE2 *requires* that everything it searches when Unicode
mode is enabled is valid UTF-8. (Or valid UTF-16/UTF-32, but for the purposes
of ripgrep, we only discuss UTF-8.) This means that if you have PCRE2's Unicode
mode enabled and you attempt to search invalid UTF-8, then the search for that
file will halt and print an error. For this reason, when PCRE2's Unicode mode
is enabled, ripgrep will automatically \"fix\" invalid UTF-8 sequences by
replacing them with the Unicode replacement codepoint.
If you would rather see the encoding errors surfaced by PCRE2 when Unicode mode
is enabled, then pass the --no-encoding flag to disable all transcoding.
This flag can be disabled with --no-pcre2-unicode.
");
let arg = RGArg::switch("pcre2-unicode")
.help(SHORT).long_help(LONG);
args.push(arg);
let arg = RGArg::switch("no-pcre2-unicode")
.hidden()
.overrides("pcre2-unicode");
args.push(arg);
}
fn flag_pretty(args: &mut Vec<RGArg>) {
const SHORT: &str = "Alias for --color always --heading --line-number.";
const LONG: &str = long!("\
@@ -1621,11 +1919,18 @@ searched, and the time taken for the entire search to complete.
This set of aggregate statistics may expand over time.
Note that this flag has no effect if --files, --files-with-matches or
--files-without-match is passed.");
--files-without-match is passed.
This flag can be disabled with --no-stats.
");
let arg = RGArg::switch("stats")
.help(SHORT).long_help(LONG);
.help(SHORT).long_help(LONG)
.overrides("no-stats");
args.push(arg);
let arg = RGArg::switch("no-stats")
.hidden()
.overrides("stats");
args.push(arg);
}
@@ -1668,6 +1973,25 @@ causes ripgrep to choose the thread count using heuristics.
args.push(arg);
}
fn flag_trim(args: &mut Vec<RGArg>) {
const SHORT: &str = "Trim prefixed whitespace from matches.";
const LONG: &str = long!("\
When set, all ASCII whitespace at the beginning of each line printed will be
trimmed.
This flag can be disabled with --no-trim.
");
let arg = RGArg::switch("trim")
.help(SHORT).long_help(LONG)
.overrides("no-trim");
args.push(arg);
let arg = RGArg::switch("no-trim")
.hidden()
.overrides("trim");
args.push(arg);
}
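(Editor's illustration, not part of this change.) What --trim toggles on the Standard printer is the `trim_ascii` option, as the `.trim_ascii(self.is_present("trim"))` change later in this diff shows; below is a self-contained sketch of that option, assuming the `grep` facade and termcolor crates.

use grep::printer::StandardBuilder;
use grep::regex::RegexMatcher;
use grep::searcher::SearcherBuilder;
use termcolor::NoColor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let matcher = RegexMatcher::new("Holmes")?;
    // trim_ascii(true) strips leading ASCII whitespace from each printed line.
    let mut printer = StandardBuilder::new()
        .trim_ascii(true)
        .build(NoColor::new(std::io::stdout()));
    let mut searcher = SearcherBuilder::new().line_number(false).build();
    let hay = b"    \tSherlock Holmes\nno match on this line\n";
    // Prints "Sherlock Holmes" with the leading spaces and tab removed.
    searcher.search_slice(&matcher, hay, printer.sink(&matcher))?;
    Ok(())
}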
fn flag_type(args: &mut Vec<RGArg>) {
const SHORT: &str = "Only search files matching TYPE.";
const LONG: &str = long!("\

View File

@@ -9,8 +9,10 @@ use std::sync::Arc;
use atty;
use clap;
use grep::matcher::LineTerminator;
use grep::searcher::{
BinaryDetection, Encoding, MmapChoice, Searcher, SearcherBuilder,
#[cfg(feature = "pcre2")]
use grep::pcre2::{
RegexMatcher as PCRE2RegexMatcher,
RegexMatcherBuilder as PCRE2RegexMatcherBuilder,
};
use grep::printer::{
ColorSpecs, Stats,
@@ -18,7 +20,13 @@ use grep::printer::{
Standard, StandardBuilder,
Summary, SummaryBuilder, SummaryKind,
};
use grep::regex::{RegexMatcher, RegexMatcherBuilder};
use grep::regex::{
RegexMatcher as RustRegexMatcher,
RegexMatcherBuilder as RustRegexMatcherBuilder,
};
use grep::searcher::{
BinaryDetection, Encoding, MmapChoice, Searcher, SearcherBuilder,
};
use ignore::overrides::{Override, OverrideBuilder};
use ignore::types::{FileTypeDef, Types, TypesBuilder};
use ignore::{Walk, WalkBuilder, WalkParallel};
@@ -275,6 +283,7 @@ impl Args {
let searcher = self.matches().searcher(self.paths())?;
let mut builder = SearchWorkerBuilder::new();
builder
.json_stats(self.matches().is_present("json"))
.preprocessor(self.matches().preprocessor())
.search_zip(self.matches().is_present("search-zip"));
Ok(builder.build(matcher, searcher, printer))
@@ -418,7 +427,33 @@ impl ArgMatches {
///
/// If there was a problem building the matcher (e.g., a syntax error),
/// then this returns an error.
#[cfg(feature = "pcre2")]
fn matcher(&self, patterns: &[String]) -> Result<PatternMatcher> {
if self.is_present("pcre2") {
let matcher = self.matcher_pcre2(patterns)?;
Ok(PatternMatcher::PCRE2(matcher))
} else {
let matcher = match self.matcher_rust(patterns) {
Ok(matcher) => matcher,
Err(err) => {
return Err(From::from(suggest_pcre2(err.to_string())));
}
};
Ok(PatternMatcher::RustRegex(matcher))
}
}
/// Return the matcher that should be used for searching.
///
/// If there was a problem building the matcher (e.g., a syntax error),
/// then this returns an error.
#[cfg(not(feature = "pcre2"))]
fn matcher(&self, patterns: &[String]) -> Result<PatternMatcher> {
if self.is_present("pcre2") {
return Err(From::from(
"PCRE2 is not available in this build of ripgrep",
));
}
let matcher = self.matcher_rust(patterns)?;
Ok(PatternMatcher::RustRegex(matcher))
}
@@ -427,18 +462,37 @@ impl ArgMatches {
///
/// If there was a problem building the matcher (such as a regex syntax
/// error), then an error is returned.
fn matcher_rust(&self, patterns: &[String]) -> Result<RegexMatcher> {
let mut builder = RegexMatcherBuilder::new();
fn matcher_rust(&self, patterns: &[String]) -> Result<RustRegexMatcher> {
let mut builder = RustRegexMatcherBuilder::new();
builder
.case_smart(self.case_smart())
.case_insensitive(self.case_insensitive())
.multi_line(true)
.dot_matches_new_line(false)
.unicode(true)
.octal(false)
.word(self.is_present("word-regexp"));
if !self.is_present("multiline") {
builder.line_terminator(Some(b'\n'));
if self.is_present("multiline") {
builder.dot_matches_new_line(self.is_present("multiline-dotall"));
if self.is_present("crlf") {
builder
.crlf(true)
.line_terminator(None);
}
} else {
builder
.line_terminator(Some(b'\n'))
.dot_matches_new_line(false);
if self.is_present("crlf") {
builder.crlf(true);
}
// We don't need to set this in multiline mode since multiline
// matchers don't use optimizations related to line terminators.
// Moreover, a multiline regex used with --null-data should
// be allowed to match NUL bytes explicitly, which this would
// otherwise forbid.
if self.is_present("null-data") {
builder.line_terminator(Some(b'\x00'));
}
}
if let Some(limit) = self.regex_size_limit()? {
builder.size_limit(limit);
@@ -449,6 +503,43 @@ impl ArgMatches {
Ok(builder.build(&patterns.join("|"))?)
}
/// Build a matcher using PCRE2.
///
/// If there was a problem building the matcher (such as a regex syntax
/// error), then an error is returned.
#[cfg(feature = "pcre2")]
fn matcher_pcre2(&self, patterns: &[String]) -> Result<PCRE2RegexMatcher> {
let mut builder = PCRE2RegexMatcherBuilder::new();
builder
.case_smart(self.case_smart())
.caseless(self.case_insensitive())
.multi_line(true)
.word(self.is_present("word-regexp"));
// For whatever reason, the JIT craps out during compilation with a
// "no more memory" error on 32 bit systems. So don't use it there.
if !cfg!(target_pointer_width = "32") {
builder.jit(true);
}
if self.pcre2_unicode() {
builder.utf(true).ucp(true);
if self.encoding()?.is_some() {
// SAFETY: If an encoding was specified, then we're guaranteed
// to get valid UTF-8, so we can disable PCRE2's UTF checking.
// (Feeding invalid UTF-8 to PCRE2 is UB.)
unsafe {
builder.disable_utf_check();
}
}
}
if self.is_present("multiline") {
builder.dotall(self.is_present("multiline-dotall"));
}
if self.is_present("crlf") {
builder.crlf(true);
}
Ok(builder.build(&patterns.join("|"))?)
}
/// Build a JSON printer that writes results to the given writer.
fn printer_json<W: io::Write>(&self, wtr: W) -> Result<JSON<W>> {
let mut builder = JSONBuilder::new();
@@ -490,7 +581,7 @@ impl ArgMatches {
.max_matches(self.max_count()?)
.column(self.column())
.byte_offset(self.is_present("byte-offset"))
.trim_ascii(false)
.trim_ascii(self.is_present("trim"))
.separator_search(None)
.separator_context(Some(self.context_separator()))
.separator_field_match(b":".to_vec())
@@ -529,9 +620,17 @@ impl ArgMatches {
/// Build a searcher from the command line parameters.
fn searcher(&self, paths: &[PathBuf]) -> Result<Searcher> {
let (ctx_before, ctx_after) = self.contexts()?;
let line_term =
if self.is_present("crlf") {
LineTerminator::crlf()
} else if self.is_present("null-data") {
LineTerminator::byte(b'\x00')
} else {
LineTerminator::byte(b'\n')
};
let mut builder = SearcherBuilder::new();
builder
.line_terminator(LineTerminator::byte(b'\n'))
.line_terminator(line_term)
.invert_match(self.is_present("invert-match"))
.line_number(self.line_number(paths))
.multi_line(self.is_present("multiline"))
@@ -592,7 +691,11 @@ impl ArgMatches {
impl ArgMatches {
/// Returns the form of binary detection to perform.
fn binary_detection(&self) -> BinaryDetection {
if self.is_present("text") || self.unrestricted_count() >= 3 {
let none =
self.is_present("text")
|| self.unrestricted_count() >= 3
|| self.is_present("null-data");
if none {
BinaryDetection::none()
} else {
BinaryDetection::quit(b'\x00')
@@ -735,7 +838,11 @@ impl ArgMatches {
/// encoding is present, the Searcher will still do BOM sniffing for UTF-16
/// and transcode seamlessly.
fn encoding(&self) -> Result<Option<Encoding>> {
if self.is_present("no-encoding") {
return Ok(None);
}
let label = match self.value_of_lossy("encoding") {
None if self.pcre2_unicode() => "utf-8".to_string(),
None => return Ok(None),
Some(label) => label,
};
@@ -762,7 +869,6 @@ impl ArgMatches {
})
}
/// Returns true if and only if matches should be grouped with file name
/// headings.
fn heading(&self) -> bool {
@@ -813,6 +919,9 @@ impl ArgMatches {
if self.is_present("no-line-number") {
return false;
}
if self.output_kind() == OutputKind::JSON {
return true;
}
// A few things can imply counting line numbers. In particular, we
// generally want to show line numbers by default when printing to a
@@ -851,7 +960,7 @@ impl ArgMatches {
// in a data structure that depends on immutability. Generally
// speaking, the worst thing that can happen is a SIGBUS (if the
// underlying file is truncated while reading it), which will cause
// ripgrep to abort.
// ripgrep to abort. This reasoning should be treated as suspect.
let maybe = unsafe { MmapChoice::auto() };
let never = MmapChoice::never();
if self.is_present("no-mmap") {
@@ -889,13 +998,21 @@ impl ArgMatches {
/// Determine the type of output we should produce.
fn output_kind(&self) -> OutputKind {
if self.is_present("quiet") {
// While we don't technically print results (or aggregate results)
// in quiet mode, we still support the --stats flag, and those
// stats are computed by the Summary printer for now.
return OutputKind::Summary;
} else if self.is_present("json") {
return OutputKind::JSON;
}
let (count, count_matches) = self.counts();
let summary =
count
|| count_matches
|| self.is_present("files-with-matches")
|| self.is_present("files-without-match")
|| self.is_present("quiet");
|| self.is_present("files-without-match");
if summary {
OutputKind::Summary
} else {
@@ -1206,6 +1323,13 @@ impl ArgMatches {
self.occurrences_of("unrestricted")
}
/// Returns true if and only if PCRE2's Unicode mode should be enabled.
fn pcre2_unicode(&self) -> bool {
// PCRE2 Unicode is enabled by default, so only disable it when told
// to do so explicitly.
self.is_present("pcre2") && !self.is_present("no-pcre2-unicode")
}
/// Returns true if and only if file names containing each match should
/// be emitted.
fn with_filename(&self, paths: &[PathBuf]) -> bool {
@@ -1344,6 +1468,21 @@ fn pattern_to_str(s: &OsStr) -> Result<&str> {
})
}
/// Inspect an error resulting from building a Rust regex matcher, and if it's
/// believed to correspond to a syntax error that PCRE2 could handle, then
/// add a message to suggest the use of -P/--pcre2.
#[cfg(feature = "pcre2")]
fn suggest_pcre2(msg: String) -> String {
if !msg.contains("backreferences") && !msg.contains("look-around") {
msg
} else {
format!("{}
Consider enabling PCRE2 with the --pcre2 flag, which can handle backreferences
and look-around.", msg)
}
}
/// Convert the result of parsing a human readable file size to a `usize`,
/// failing if the type does not fit.
fn u64_to_usize(
@@ -1378,7 +1517,17 @@ fn stdin_is_readable() -> bool {
/// Returns true if and only if stdin is deemed searchable.
#[cfg(windows)]
fn stdin_is_readable() -> bool {
// On Windows, it's not clear what the possibilities are to me, so just
// always return true.
true
use std::os::windows::io::AsRawHandle;
use winapi::um::fileapi::GetFileType;
use winapi::um::winbase::{FILE_TYPE_DISK, FILE_TYPE_PIPE};
let handle = match Handle::stdin() {
Err(_) => return false,
Ok(handle) => handle,
};
let raw_handle = handle.as_raw_handle();
// SAFETY: As far as I can tell, it's not possible to use GetFileType in
// a way that violates safety. We give it a handle and we get an integer.
let ft = unsafe { GetFileType(raw_handle) };
ft == FILE_TYPE_DISK || ft == FILE_TYPE_PIPE
}

View File

@@ -11,6 +11,8 @@ extern crate log;
extern crate num_cpus;
extern crate regex;
extern crate same_file;
#[macro_use]
extern crate serde_json;
extern crate termcolor;
#[cfg(windows)]
extern crate winapi;
@@ -39,10 +41,10 @@ mod search;
mod subject;
mod unescape;
pub type Result<T> = ::std::result::Result<T, Box<::std::error::Error>>;
type Result<T> = ::std::result::Result<T, Box<::std::error::Error>>;
pub fn main() {
match Args::parse().and_then(run) {
fn main() {
match Args::parse().and_then(try_main) {
Ok(true) => process::exit(0),
Ok(false) => process::exit(1),
Err(err) => {
@@ -52,7 +54,7 @@ pub fn main() {
}
}
fn run(args: Args) -> Result<bool> {
fn try_main(args: Args) -> Result<bool> {
use args::Command::*;
match args.command()? {
@@ -103,7 +105,7 @@ fn search(args: Args) -> Result<bool> {
if let Some(ref stats) = stats {
let elapsed = Instant::now().duration_since(started_at);
// We don't care if we couldn't print this successfully.
let _ = searcher.printer().print_stats(elapsed, stats);
let _ = searcher.print_stats(elapsed, stats);
}
Ok(matched)
}
@@ -181,7 +183,7 @@ fn search_parallel(args: Args) -> Result<bool> {
let stats = locked_stats.lock().unwrap();
let mut searcher = args.search_worker(args.stdout())?;
// We don't care if we couldn't print this successfully.
let _ = searcher.printer().print_stats(elapsed, &stats);
let _ = searcher.print_stats(elapsed, &stats);
}
Ok(matched.load(SeqCst))
}

View File

@@ -3,9 +3,12 @@ use std::path::{Path, PathBuf};
use std::time::Duration;
use grep::matcher::Matcher;
#[cfg(feature = "pcre2")]
use grep::pcre2::{RegexMatcher as PCRE2RegexMatcher};
use grep::printer::{JSON, Standard, Summary, Stats};
use grep::regex::RegexMatcher;
use grep::regex::{RegexMatcher as RustRegexMatcher};
use grep::searcher::Searcher;
use serde_json as json;
use termcolor::WriteColor;
use decompressor::{DecompressionReader, is_compressed};
@@ -17,6 +20,7 @@ use subject::Subject;
/// at a very high level.
#[derive(Clone, Debug)]
struct Config {
json_stats: bool,
preprocessor: Option<PathBuf>,
search_zip: bool,
}
@@ -24,6 +28,7 @@ struct Config {
impl Default for Config {
fn default() -> Config {
Config {
json_stats: false,
preprocessor: None,
search_zip: false,
}
@@ -60,6 +65,18 @@ impl SearchWorkerBuilder {
SearchWorker { config, matcher, searcher, printer }
}
/// Forcefully use JSON to emit statistics, even if the underlying printer
/// is not the JSON printer.
///
/// This is useful for implementing flag combinations like
/// `--json --quiet`, which uses the summary printer for implementing
/// `--quiet` but still wants to emit summary statistics, which should
/// be JSON formatted because of the `--json` flag.
pub fn json_stats(&mut self, yes: bool) -> &mut SearchWorkerBuilder {
self.config.json_stats = yes;
self
}
/// Set the path to a preprocessor command.
///
/// When this is set, instead of searching files directly, the given
@@ -116,7 +133,9 @@ impl SearchResult {
/// The pattern matcher used by a search worker.
#[derive(Clone, Debug)]
pub enum PatternMatcher {
RustRegex(RegexMatcher),
RustRegex(RustRegexMatcher),
#[cfg(feature = "pcre2")]
PCRE2(PCRE2RegexMatcher),
}
/// The printer used by a search worker.
@@ -134,19 +153,15 @@ pub enum Printer<W> {
}
impl<W: WriteColor> Printer<W> {
/// Print the given statistics to the underlying writer in a way that is
/// consistent with this printer's format.
///
/// While `Stats` contains a duration itself, this only corresponds to the
/// time spent searching, whereas `total_duration` should roughly
/// approximate the lifespan of the ripgrep process itself.
pub fn print_stats(
fn print_stats(
&mut self,
total_duration: Duration,
stats: &Stats,
) -> io::Result<()> {
match *self {
Printer::JSON(_) => unimplemented!(),
Printer::JSON(_) => {
self.print_stats_json(total_duration, stats)
}
Printer::Standard(_) | Printer::Summary(_) => {
self.print_stats_human(total_duration, stats)
}
@@ -167,8 +182,8 @@ impl<W: WriteColor> Printer<W> {
{searches} files searched
{bytes_printed} bytes printed
{bytes_searched} bytes searched
{search_time:.6} seconds spent searching
{process_time:.6} seconds
{search_time:0.6} seconds spent searching
{process_time:0.6} seconds
",
matches = stats.matches(),
lines = stats.matched_lines(),
@@ -181,6 +196,29 @@ impl<W: WriteColor> Printer<W> {
)
}
fn print_stats_json(
&mut self,
total_duration: Duration,
stats: &Stats,
) -> io::Result<()> {
// We specifically match the format laid out by the JSON printer in
// the grep-printer crate. We simply "extend" it with the 'summary'
// message type.
let fractional = fractional_seconds(total_duration);
json::to_writer(self.get_mut(), &json!({
"type": "summary",
"data": {
"stats": stats,
"elapsed_total": {
"secs": total_duration.as_secs(),
"nanos": total_duration.subsec_nanos(),
"human": format!("{:0.6}s", fractional),
},
}
}))?;
write!(self.get_mut(), "\n")
}
/// Return a mutable reference to the underlying printer's writer.
pub fn get_mut(&mut self) -> &mut W {
match *self {
@@ -215,6 +253,24 @@ impl<W: WriteColor> SearchWorker<W> {
&mut self.printer
}
/// Print the given statistics to the underlying writer in a way that is
/// consistent with this searcher's printer's format.
///
/// While `Stats` contains a duration itself, this only corresponds to the
/// time spent searching, whereas `total_duration` should roughly
/// approximate the lifespan of the ripgrep process itself.
pub fn print_stats(
&mut self,
total_duration: Duration,
stats: &Stats,
) -> io::Result<()> {
if self.config.json_stats {
self.printer().print_stats_json(total_duration, stats)
} else {
self.printer().print_stats(total_duration, stats)
}
}
/// Search the given subject using the appropriate strategy.
fn search_impl(&mut self, subject: &Subject) -> io::Result<SearchResult> {
let path = subject.path();
@@ -243,6 +299,8 @@ impl<W: WriteColor> SearchWorker<W> {
let (searcher, printer) = (&mut self.searcher, &mut self.printer);
match self.matcher {
RustRegex(ref m) => search_path(m, searcher, printer, path),
#[cfg(feature = "pcre2")]
PCRE2(ref m) => search_path(m, searcher, printer, path),
}
}
@@ -265,6 +323,8 @@ impl<W: WriteColor> SearchWorker<W> {
let (searcher, printer) = (&mut self.searcher, &mut self.printer);
match self.matcher {
RustRegex(ref m) => search_reader(m, searcher, printer, path, rdr),
#[cfg(feature = "pcre2")]
PCRE2(ref m) => search_reader(m, searcher, printer, path, rdr),
}
}
}

View File

@@ -83,7 +83,6 @@ impl SubjectBuilder {
return None;
}
Err(err) => {
message!("{}: {}", subj.dent.path().display(), err);
debug!(
"ignoring {}: got error: {}",
subj.dent.path().display(), err

tests/feature.rs Normal file
View File

@@ -0,0 +1,631 @@
use hay::{SHERLOCK, SHERLOCK_CRLF};
use util::{Dir, TestCommand, sort_lines};
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_sjis, |dir: Dir, mut cmd: TestCommand| {
dir.create_bytes(
"foo",
b"\x84Y\x84u\x84\x82\x84|\x84\x80\x84{ \x84V\x84\x80\x84|\x84}\x84\x83"
);
cmd.arg("-Esjis").arg("Шерлок Холмс");
eqnice!("foo:Шерлок Холмс\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_utf16_auto, |dir: Dir, mut cmd: TestCommand| {
dir.create_bytes(
"foo",
b"\xff\xfe(\x045\x04@\x04;\x04>\x04:\x04 \x00%\x04>\x04;\x04<\x04A\x04"
);
cmd.arg("Шерлок Холмс");
eqnice!("foo:Шерлок Холмс\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_utf16_explicit, |dir: Dir, mut cmd: TestCommand| {
dir.create_bytes(
"foo",
b"\xff\xfe(\x045\x04@\x04;\x04>\x04:\x04 \x00%\x04>\x04;\x04<\x04A\x04"
);
cmd.arg("-Eutf-16le").arg("Шерлок Холмс");
eqnice!("foo:Шерлок Холмс\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_eucjp, |dir: Dir, mut cmd: TestCommand| {
dir.create_bytes(
"foo",
b"\xa7\xba\xa7\xd6\xa7\xe2\xa7\xdd\xa7\xe0\xa7\xdc \xa7\xb7\xa7\xe0\xa7\xdd\xa7\xde\xa7\xe3"
);
cmd.arg("-Eeuc-jp").arg("Шерлок Холмс");
eqnice!("foo:Шерлок Холмс\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_unknown_encoding, |_: Dir, mut cmd: TestCommand| {
cmd.arg("-Efoobar").assert_non_empty_stderr();
});
// See: https://github.com/BurntSushi/ripgrep/issues/1
rgtest!(f1_replacement_encoding, |_: Dir, mut cmd: TestCommand| {
cmd.arg("-Ecsiso2022kr").assert_non_empty_stderr();
});
// See: https://github.com/BurntSushi/ripgrep/issues/7
rgtest!(f7, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("pat", "Sherlock\nHolmes");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("-fpat").arg("sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/7
rgtest!(f7_stdin, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("-f-").pipe("Sherlock"));
});
// See: https://github.com/BurntSushi/ripgrep/issues/20
rgtest!(f20_no_filename, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--no-filename");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("--no-filename").arg("Sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/34
rgtest!(f34_only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:Sherlock
sherlock:Sherlock
";
eqnice!(expected, cmd.arg("-o").arg("Sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/34
rgtest!(f34_only_matching_line_column, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:1:57:Sherlock
sherlock:3:49:Sherlock
";
cmd.arg("-o").arg("--column").arg("-n").arg("Sherlock");
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/45
rgtest!(f45_relative_cwd, |dir: Dir, mut cmd: TestCommand| {
dir.create(".not-an-ignore", "foo\n/bar");
dir.create_dir("bar");
dir.create_dir("baz/bar");
dir.create_dir("baz/baz/bar");
dir.create("bar/test", "test");
dir.create("baz/bar/test", "test");
dir.create("baz/baz/bar/test", "test");
dir.create("baz/foo", "test");
dir.create("baz/test", "test");
dir.create("foo", "test");
dir.create("test", "test");
cmd.arg("-l").arg("test");
// First, get a baseline without applying ignore rules.
let expected = "
bar/test
baz/bar/test
baz/baz/bar/test
baz/foo
baz/test
foo
test
";
eqnice!(sort_lines(expected), sort_lines(&cmd.stdout()));
// Now try again with the ignore file activated.
cmd.arg("--ignore-file").arg(".not-an-ignore");
let expected = "
baz/bar/test
baz/baz/bar/test
baz/test
test
";
eqnice!(sort_lines(expected), sort_lines(&cmd.stdout()));
// Now do it again, but inside the baz directory. Since the ignore file
// is interpreted relative to the CWD, this will cause the /bar anchored
// pattern to filter out baz/bar, which is a subtle difference between true
// parent ignore files and manually specified ignore files.
let mut cmd = dir.command();
cmd.args(&["--ignore-file", "../.not-an-ignore", "-l", "test"]);
cmd.current_dir(dir.path().join("baz"));
let expected = "
baz/bar/test
test
";
eqnice!(sort_lines(expected), sort_lines(&cmd.stdout()));
});
// See: https://github.com/BurntSushi/ripgrep/issues/45
rgtest!(f45_precedence_with_others, |dir: Dir, mut cmd: TestCommand| {
dir.create(".not-an-ignore", "*.log");
dir.create(".ignore", "!imp.log");
dir.create("imp.log", "test");
dir.create("wat.log", "test");
cmd.arg("--ignore-file").arg(".not-an-ignore").arg("test");
eqnice!("imp.log:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/45
rgtest!(f45_precedence_internal, |dir: Dir, mut cmd: TestCommand| {
dir.create(".not-an-ignore1", "*.log");
dir.create(".not-an-ignore2", "!imp.log");
dir.create("imp.log", "test");
dir.create("wat.log", "test");
cmd.args(&[
"--ignore-file", ".not-an-ignore1",
"--ignore-file", ".not-an-ignore2",
"test",
]);
eqnice!("imp.log:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/68
rgtest!(f68_no_ignore_vcs, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "foo");
dir.create(".ignore", "bar");
dir.create("foo", "test");
dir.create("bar", "test");
eqnice!("foo:test\n", cmd.arg("--no-ignore-vcs").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/70
rgtest!(f70_smart_case, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("-S").arg("sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/89
rgtest!(f89_files_with_matches, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--null").arg("--files-with-matches").arg("Sherlock");
eqnice!("sherlock\x00", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/89
rgtest!(f89_files_without_match, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "foo");
cmd.arg("--null").arg("--files-without-match").arg("Sherlock");
eqnice!("file.py\x00", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/89
rgtest!(f89_count, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--null").arg("--count").arg("Sherlock");
eqnice!("sherlock\x002\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/89
rgtest!(f89_files, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
eqnice!("sherlock\x00", cmd.arg("--null").arg("--files").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/89
rgtest!(f89_match, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock\x00For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock\x00Holmeses, success in the province of detective work must always
sherlock\x00be, to a very large extent, the result of luck. Sherlock Holmes
sherlock\x00can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.arg("--null").arg("-C1").arg("Sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/109
rgtest!(f109_max_depth, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("one");
dir.create("one/pass", "far");
dir.create_dir("one/too");
dir.create("one/too/many", "far");
cmd.arg("--maxdepth").arg("2").arg("far");
eqnice!("one/pass:far\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/124
rgtest!(f109_case_sensitive_part1, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "tEsT");
cmd.arg("--smart-case").arg("--case-sensitive").arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/124
rgtest!(f109_case_sensitive_part2, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "tEsT");
cmd.arg("--ignore-case").arg("--case-sensitive").arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/129
rgtest!(f129_matches, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test\ntest abcdefghijklmnopqrstuvwxyz test");
let expected = "foo:test\nfoo:[Omitted long matching line]\n";
eqnice!(expected, cmd.arg("-M26").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/129
rgtest!(f129_context, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test\nabcdefghijklmnopqrstuvwxyz");
let expected = "foo:test\nfoo-[Omitted long context line]\n";
eqnice!(expected, cmd.arg("-M20").arg("-C1").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/129
rgtest!(f129_replace, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test\ntest abcdefghijklmnopqrstuvwxyz test");
let expected = "foo:foo\nfoo:[Omitted long line with 2 matches]\n";
eqnice!(expected, cmd.arg("-M26").arg("-rfoo").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/159
rgtest!(f159_max_count, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test\ntest");
eqnice!("foo:test\n", cmd.arg("-m1").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/159
rgtest!(f159_max_count_zero, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test\ntest");
cmd.arg("-m0").arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/196
rgtest!(f196_persistent_config, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("sherlock").arg("sherlock");
// Make sure we get no matches by default.
cmd.assert_err();
// Now add our config file, and make sure it impacts ripgrep.
dir.create(".ripgreprc", "--ignore-case");
cmd.cmd().env("RIPGREP_CONFIG_PATH", ".ripgreprc");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/243
rgtest!(f243_column_line, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test");
eqnice!("foo:1:1:test\n", cmd.arg("--column").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/263
rgtest!(f263_sort_files, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test");
dir.create("abc", "test");
dir.create("zoo", "test");
dir.create("bar", "test");
let expected = "abc:test\nbar:test\nfoo:test\nzoo:test\n";
eqnice!(expected, cmd.arg("--sort-files").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/275
rgtest!(f275_pathsep, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo");
dir.create("foo/bar", "test");
cmd.arg("test").arg("--path-separator").arg("Z");
eqnice!("fooZbar:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/362
rgtest!(f362_dfa_size_limit, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
// This should fall back to the nfa engine but should still produce the
// expected result.
cmd.arg("--dfa-size-limit").arg("10").arg(r"For\s").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/362
rgtest!(f362_exceeds_regex_size_limit, |dir: Dir, mut cmd: TestCommand| {
// --regex-size-limit doesn't apply to PCRE2.
if dir.is_pcre2() {
return;
}
cmd.arg("--regex-size-limit").arg("10K").arg(r"[0-9]\w+").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/362
#[cfg(target_pointer_width = "32")]
rgtest!(f362_u64_to_narrow_usize_overflow, |dir: Dir, mut cmd: TestCommand| {
// --dfa-size-limit doesn't apply to PCRE2.
if dir.is_pcre2() {
return;
}
dir.create_size("foo", 1000000);
// 2^35 * 2^20 is ok for u64, but not for usize
cmd.arg("--dfa-size-limit").arg("34359738368M").arg("--files");
cmd.assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/411
rgtest!(f411_single_threaded_search_stats, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let lines = cmd.arg("--stats").arg("Sherlock").stdout();
assert!(lines.contains("2 matched lines"));
assert!(lines.contains("1 files contained matches"));
assert!(lines.contains("1 files searched"));
assert!(lines.contains("seconds"));
});
rgtest!(f411_parallel_search_stats, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock_1", SHERLOCK);
dir.create("sherlock_2", SHERLOCK);
let lines = cmd.arg("--stats").arg("Sherlock").stdout();
assert!(lines.contains("4 matched lines"));
assert!(lines.contains("2 files contained matches"));
assert!(lines.contains("2 files searched"));
assert!(lines.contains("seconds"));
});
// See: https://github.com/BurntSushi/ripgrep/issues/416
rgtest!(f416_crlf, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK_CRLF);
cmd.arg("--crlf").arg(r"Sherlock$").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock\r
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/416
rgtest!(f416_crlf_multiline, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK_CRLF);
cmd.arg("--crlf").arg("-U").arg(r"Sherlock$").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock\r
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/416
rgtest!(f416_crlf_only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK_CRLF);
cmd.arg("--crlf").arg("-o").arg(r"Sherlock$").arg("sherlock");
let expected = "\
Sherlock\r
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/419
rgtest!(f419_zero_as_shortcut_for_null, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-0").arg("--count").arg("Sherlock");
eqnice!("sherlock\x002\n", cmd.stdout());
});
rgtest!(f740_passthru, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "\nfoo\nbar\nfoobar\n\nbaz\n");
dir.create("patterns", "foo\nbar\n");
// We can't assume that the way colour specs are translated to ANSI
// sequences will remain stable, and --replace doesn't currently work with
// pass-through, so for now we don't actually test the match sub-strings.
let common_args = &["-n", "--passthru"];
let foo_expected = "\
1-
2:foo
3-bar
4:foobar
5-
6-baz
";
// With single pattern
cmd.args(common_args).arg("foo").arg("file");
eqnice!(foo_expected, cmd.stdout());
let foo_bar_expected = "\
1-
2:foo
3:bar
4:foobar
5-
6-baz
";
// With multiple -e patterns
let mut cmd = dir.command();
cmd.args(common_args);
cmd.args(&["-e", "foo", "-e", "bar", "file"]);
eqnice!(foo_bar_expected, cmd.stdout());
// With multiple -f patterns
let mut cmd = dir.command();
cmd.args(common_args);
cmd.args(&["-f", "patterns", "file"]);
eqnice!(foo_bar_expected, cmd.stdout());
// -c should override
let mut cmd = dir.command();
cmd.args(common_args);
cmd.args(&["-c", "foo", "file"]);
eqnice!("2\n", cmd.stdout());
let only_foo_expected = "\
1-
2:foo
3-bar
4:foo
5-
6-baz
";
// -o should work
let mut cmd = dir.command();
cmd.args(common_args);
cmd.args(&["-o", "foo", "file"]);
eqnice!(only_foo_expected, cmd.stdout());
let replace_foo_expected = "\
1-
2:wat
3-bar
4:watbar
5-
6-baz
";
// -r should work
let mut cmd = dir.command();
cmd.args(common_args);
cmd.args(&["-r", "wat", "foo", "file"]);
eqnice!(replace_foo_expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/948
rgtest!(f948_exit_code_match, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg(".");
cmd.assert_exit_code(0);
});
// See: https://github.com/BurntSushi/ripgrep/issues/948
rgtest!(f948_exit_code_no_match, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("NADA");
cmd.assert_exit_code(1);
});
// See: https://github.com/BurntSushi/ripgrep/issues/948
rgtest!(f948_exit_code_error, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("*");
cmd.assert_exit_code(2);
});
// See: https://github.com/BurntSushi/ripgrep/issues/917
rgtest!(f917_trim, |dir: Dir, mut cmd: TestCommand| {
const SHERLOCK: &'static str = "\
zzz
    For the Doctor Watsons of this world, as opposed to the Sherlock
    Holmeses, success in the province of detective work must always
\tbe, to a very large extent, the result of luck. Sherlock Holmes
    can extract a clew from a wisp of straw or a flake of cigar ash;
but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-B1", "-A2", "--trim", "Holmeses", "sherlock",
]);
let expected = "\
2-For the Doctor Watsons of this world, as opposed to the Sherlock
3:Holmeses, success in the province of detective work must always
4-be, to a very large extent, the result of luck. Sherlock Holmes
5-can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/917
//
// This is like f917_trim, except this tests that trimming occurs even when the
// whitespace is part of a match.
rgtest!(f917_trim_match, |dir: Dir, mut cmd: TestCommand| {
const SHERLOCK: &'static str = "\
zzz
    For the Doctor Watsons of this world, as opposed to the Sherlock
    Holmeses, success in the province of detective work must always
\tbe, to a very large extent, the result of luck. Sherlock Holmes
    can extract a clew from a wisp of straw or a flake of cigar ash;
but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-B1", "-A2", "--trim", r"\s+Holmeses", "sherlock",
]);
let expected = "\
2-For the Doctor Watsons of this world, as opposed to the Sherlock
3:Holmeses, success in the province of detective work must always
4-be, to a very large extent, the result of luck. Sherlock Holmes
5-can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/993
rgtest!(f993_null_data, |dir: Dir, mut cmd: TestCommand| {
dir.create("test", "foo\x00bar\x00\x00\x00baz\x00");
cmd.arg("--null-data").arg(r".+").arg("test");
// If we just used -a instead of --null-data, then the result would include
// all NUL bytes.
let expected = "foo\x00bar\x00baz\x00";
eqnice!(expected, cmd.stdout());
});

View File

@@ -7,18 +7,11 @@ but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
pub const CODE: &'static str = "\
extern crate snap;
use std::io;
fn main() {
let stdin = io::stdin();
let stdout = io::stdout();
// Wrap the stdin reader in a Snappy reader.
let mut rdr = snap::Reader::new(stdin.lock());
let mut wtr = stdout.lock();
io::copy(&mut rdr, &mut wtr).expect(\"I/O operation failed\");
}
pub const SHERLOCK_CRLF: &'static str = "\
For the Doctor Watsons of this world, as opposed to the Sherlock\r
Holmeses, success in the province of detective work must always\r
be, to a very large extent, the result of luck. Sherlock Holmes\r
can extract a clew from a wisp of straw or a flake of cigar ash;\r
but Doctor Watson has to have it taken out for him and dusted,\r
and exhibited clearly, with a label attached.\r
";

tests/json.rs Normal file
View File

@@ -0,0 +1,263 @@
use std::time;
use serde_json as json;
use hay::{SHERLOCK, SHERLOCK_CRLF};
use util::{Dir, TestCommand};
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[serde(tag = "type", content = "data")]
#[serde(rename_all = "snake_case")]
enum Message {
Begin(Begin),
End(End),
Match(Match),
Context(Context),
Summary(Summary),
}
impl Message {
fn unwrap_begin(&self) -> Begin {
match *self {
Message::Begin(ref x) => x.clone(),
ref x => panic!("expected Message::Begin but got {:?}", x),
}
}
fn unwrap_end(&self) -> End {
match *self {
Message::End(ref x) => x.clone(),
ref x => panic!("expected Message::End but got {:?}", x),
}
}
fn unwrap_match(&self) -> Match {
match *self {
Message::Match(ref x) => x.clone(),
ref x => panic!("expected Message::Match but got {:?}", x),
}
}
fn unwrap_context(&self) -> Context {
match *self {
Message::Context(ref x) => x.clone(),
ref x => panic!("expected Message::Context but got {:?}", x),
}
}
fn unwrap_summary(&self) -> Summary {
match *self {
Message::Summary(ref x) => x.clone(),
ref x => panic!("expected Message::Summary but got {:?}", x),
}
}
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Begin {
path: Option<Data>,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct End {
path: Option<Data>,
binary_offset: Option<u64>,
stats: Stats,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Summary {
elapsed_total: Duration,
stats: Stats,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Match {
path: Option<Data>,
lines: Data,
line_number: Option<u64>,
absolute_offset: u64,
submatches: Vec<SubMatch>,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Context {
path: Option<Data>,
lines: Data,
line_number: Option<u64>,
absolute_offset: u64,
submatches: Vec<SubMatch>,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct SubMatch {
#[serde(rename = "match")]
m: Data,
start: usize,
end: usize,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[serde(untagged)]
enum Data {
Text { text: String },
// This variant is used when the data isn't valid UTF-8. The bytes are
// base64 encoded, so using a String here is OK.
Bytes { bytes: String },
}
impl Data {
fn text(s: &str) -> Data { Data::Text { text: s.to_string() } }
fn bytes(s: &str) -> Data { Data::Bytes { bytes: s.to_string() } }
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Stats {
elapsed: Duration,
searches: u64,
searches_with_match: u64,
bytes_searched: u64,
bytes_printed: u64,
matched_lines: u64,
matches: u64,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Duration {
#[serde(flatten)]
duration: time::Duration,
human: String,
}
/// Decode JSON Lines into a Vec<Message>. If there was an error decoding,
/// this function panics.
fn json_decode(jsonlines: &str) -> Vec<Message> {
json::Deserializer::from_str(jsonlines)
.into_iter()
.collect::<Result<Vec<Message>, _>>()
.unwrap()
}
rgtest!(basic, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--json").arg("-B1").arg("Sherlock Holmes").arg("sherlock");
let msgs = json_decode(&cmd.stdout());
assert_eq!(
msgs[0].unwrap_begin(),
Begin { path: Some(Data::text("sherlock")) }
);
assert_eq!(
msgs[1].unwrap_context(),
Context {
path: Some(Data::text("sherlock")),
lines: Data::text("Holmeses, success in the province of detective work must always\n"),
line_number: Some(2),
absolute_offset: 65,
submatches: vec![],
}
);
assert_eq!(
msgs[2].unwrap_match(),
Match {
path: Some(Data::text("sherlock")),
lines: Data::text("be, to a very large extent, the result of luck. Sherlock Holmes\n"),
line_number: Some(3),
absolute_offset: 129,
submatches: vec![
SubMatch {
m: Data::text("Sherlock Holmes"),
start: 48,
end: 63,
},
],
}
);
assert_eq!(
msgs[3].unwrap_end().path,
Some(Data::text("sherlock"))
);
assert_eq!(
msgs[3].unwrap_end().binary_offset,
None
);
assert_eq!(
msgs[4].unwrap_summary().stats.searches_with_match,
1
);
assert_eq!(
msgs[4].unwrap_summary().stats.bytes_printed,
494
);
});
#[cfg(unix)]
rgtest!(notutf8, |dir: Dir, mut cmd: TestCommand| {
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
// This test does not work with PCRE2 because PCRE2 does not support the
// `u` flag.
if dir.is_pcre2() {
return;
}
// macOS doesn't like this either... sigh.
if cfg!(target_os = "macos") {
return;
}
let name = &b"foo\xFFbar"[..];
let contents = &b"quux\xFFbaz"[..];
// APFS does not support creating files with invalid UTF-8 bytes, so just
// skip the test if we can't create our file.
if !dir.try_create_bytes(OsStr::from_bytes(name), contents).is_ok() {
return;
}
cmd.arg("--json").arg(r"(?-u)\xFF");
let msgs = json_decode(&cmd.stdout());
assert_eq!(
msgs[0].unwrap_begin(),
Begin { path: Some(Data::bytes("Zm9v/2Jhcg==")) }
);
assert_eq!(
msgs[1].unwrap_match(),
Match {
path: Some(Data::bytes("Zm9v/2Jhcg==")),
lines: Data::bytes("cXV1eP9iYXo="),
line_number: Some(1),
absolute_offset: 0,
submatches: vec![
SubMatch {
m: Data::bytes("/w=="),
start: 4,
end: 5,
},
],
}
);
});
// See: https://github.com/BurntSushi/ripgrep/issues/416
//
// This test in particular checks that our match does _not_ include the `\r`
// even though the '$' may be rewritten as '(?:\r??$)' and could thus include
// `\r` in the match.
rgtest!(crlf, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK_CRLF);
cmd.arg("--json").arg("--crlf").arg(r"Sherlock$").arg("sherlock");
let msgs = json_decode(&cmd.stdout());
assert_eq!(
msgs[1].unwrap_match().submatches[0].clone(),
SubMatch {
m: Data::text("Sherlock"),
start: 56,
end: 64,
},
);
});

tests/macros.rs Normal file
View File

@@ -0,0 +1,61 @@
#[macro_export]
macro_rules! rgtest {
($name:ident, $fun:expr) => {
#[test]
fn $name() {
let (dir, cmd) = ::util::setup(stringify!($name));
$fun(dir, cmd);
if cfg!(feature = "pcre2") {
let (dir, cmd) = ::util::setup_pcre2(stringify!($name));
$fun(dir, cmd);
}
}
}
}
#[macro_export]
macro_rules! eqnice {
($expected:expr, $got:expr) => {
let expected = &*$expected;
let got = &*$got;
if expected != got {
panic!("
printed outputs differ!
expected:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
got:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
", expected, got);
}
}
}
#[macro_export]
macro_rules! eqnice_repr {
($expected:expr, $got:expr) => {
let expected = &*$expected;
let got = &*$got;
if expected != got {
panic!("
printed outputs differ!
expected:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{:?}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
got:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{:?}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
", expected, got);
}
}
}

tests/misc.rs Normal file
View File

@@ -0,0 +1,948 @@
use hay::SHERLOCK;
use util::{Dir, TestCommand, cmd_exists, sort_lines};
// This file contains "miscellaneous" tests that were either written before
// features were tracked more explicitly, or were simply written without
// linking them to a specific issue number. We should try to minimize the
// addition of more tests in this file and instead add them to either the
// regression test suite or the feature test suite (found in regression.rs and
// feature.rs, respectively).
rgtest!(single_file, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("Sherlock").arg("sherlock").stdout());
});
rgtest!(dir, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("Sherlock").stdout());
});
rgtest!(line_numbers, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
3:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.arg("-n").arg("Sherlock").arg("sherlock").stdout());
});
rgtest!(columns, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--column").arg("Sherlock").arg("sherlock");
let expected = "\
1:57:For the Doctor Watsons of this world, as opposed to the Sherlock
3:49:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(with_filename, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-H").arg("Sherlock").arg("sherlock");
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(with_heading, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
// This forces the issue since --with-filename is disabled by default
// when searching one file.
"--with-filename", "--heading",
"Sherlock", "sherlock",
]);
let expected = "\
sherlock
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(with_heading_default, |dir: Dir, mut cmd: TestCommand| {
// Search two or more files and get --with-filename enabled by default.
// Use -j1 to get deterministic results.
dir.create("sherlock", SHERLOCK);
dir.create("foo", "Sherlock Holmes lives on Baker Street.");
cmd.arg("-j1").arg("--heading").arg("Sherlock");
let expected = "\
foo
Sherlock Holmes lives on Baker Street.
sherlock
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(sort_lines(expected), sort_lines(&cmd.stdout()));
});
rgtest!(inverted, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-v").arg("Sherlock").arg("sherlock");
let expected = "\
Holmeses, success in the province of detective work must always
can extract a clew from a wisp of straw or a flake of cigar ash;
but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.stdout());
});
rgtest!(inverted_line_numbers, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-n").arg("-v").arg("Sherlock").arg("sherlock");
let expected = "\
2:Holmeses, success in the province of detective work must always
4:can extract a clew from a wisp of straw or a flake of cigar ash;
5:but Doctor Watson has to have it taken out for him and dusted,
6:and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.stdout());
});
rgtest!(case_insensitive, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-i").arg("sherlock").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(word, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-w").arg("as").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
";
eqnice!(expected, cmd.stdout());
});
rgtest!(line, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-x",
"Watson|and exhibited clearly, with a label attached.",
"sherlock",
]);
let expected = "\
and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.stdout());
});
rgtest!(literal, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file", "blib\n()\nblab\n");
cmd.arg("-F").arg("()").arg("file");
eqnice!("()\n", cmd.stdout());
});
rgtest!(quiet, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-q").arg("Sherlock").arg("sherlock");
assert!(cmd.stdout().is_empty());
});
rgtest!(replace, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-r").arg("FooBar").arg("Sherlock").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the FooBar
be, to a very large extent, the result of luck. FooBar Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(replace_groups, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-r", "$2, $1", "([A-Z][a-z]+) ([A-Z][a-z]+)", "sherlock",
]);
let expected = "\
For the Watsons, Doctor of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Holmes, Sherlock
but Watson, Doctor has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
rgtest!(replace_named_groups, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-r", "$last, $first",
"(?P<first>[A-Z][a-z]+) (?P<last>[A-Z][a-z]+)",
"sherlock",
]);
let expected = "\
For the Watsons, Doctor of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Holmes, Sherlock
but Watson, Doctor has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
rgtest!(replace_with_only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-o").arg("-r").arg("$1").arg(r"of (\w+)").arg("sherlock");
let expected = "\
this
detective
luck
straw
cigar
";
eqnice!(expected, cmd.stdout());
});
rgtest!(file_types, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
cmd.arg("-t").arg("rust").arg("Sherlock");
eqnice!("file.rs:Sherlock\n", cmd.stdout());
});
rgtest!(file_types_all, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
cmd.arg("-t").arg("all").arg("Sherlock");
eqnice!("file.py:Sherlock\n", cmd.stdout());
});
rgtest!(file_types_negate, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.remove("sherlock");
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
cmd.arg("-T").arg("rust").arg("Sherlock");
eqnice!("file.py:Sherlock\n", cmd.stdout());
});
rgtest!(file_types_negate_all, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
cmd.arg("-T").arg("all").arg("Sherlock");
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(file_type_clear, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
cmd.arg("--type-clear").arg("rust").arg("-t").arg("rust").arg("Sherlock");
cmd.assert_non_empty_stderr();
});
rgtest!(file_type_add, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
dir.create("file.wat", "Sherlock");
cmd.args(&[
"--type-add", "wat:*.wat", "-t", "wat", "Sherlock",
]);
eqnice!("file.wat:Sherlock\n", cmd.stdout());
});
rgtest!(file_type_add_compose, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
dir.create("file.wat", "Sherlock");
cmd.args(&[
"--type-add", "wat:*.wat",
"--type-add", "combo:include:wat,py",
"-t", "combo",
"Sherlock",
]);
let expected = "\
file.py:Sherlock
file.wat:Sherlock
";
eqnice!(expected, sort_lines(&cmd.stdout()));
});
rgtest!(glob, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
cmd.arg("-g").arg("*.rs").arg("Sherlock");
eqnice!("file.rs:Sherlock\n", cmd.stdout());
});
rgtest!(glob_negate, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.remove("sherlock");
dir.create("file.py", "Sherlock");
dir.create("file.rs", "Sherlock");
cmd.arg("-g").arg("!*.rs").arg("Sherlock");
eqnice!("file.py:Sherlock\n", cmd.stdout());
});
rgtest!(glob_case_insensitive, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.HTML", "Sherlock");
cmd.arg("--iglob").arg("*.html").arg("Sherlock");
eqnice!("file.HTML:Sherlock\n", cmd.stdout());
});
rgtest!(glob_case_sensitive, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file1.HTML", "Sherlock");
dir.create("file2.html", "Sherlock");
cmd.arg("--glob").arg("*.html").arg("Sherlock");
eqnice!("file2.html:Sherlock\n", cmd.stdout());
});
rgtest!(byte_offset_only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-b").arg("-o").arg("Sherlock");
let expected = "\
sherlock:56:Sherlock
sherlock:177:Sherlock
";
eqnice!(expected, cmd.stdout());
});
rgtest!(count, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--count").arg("Sherlock");
let expected = "sherlock:2\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(count_matches, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--count-matches").arg("the");
let expected = "sherlock:4\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(count_matches_inverted, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--count-matches").arg("--invert-match").arg("Sherlock");
let expected = "sherlock:4\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(count_matches_via_only, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--count").arg("--only-matching").arg("the");
let expected = "sherlock:4\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(files_with_matches, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--files-with-matches").arg("Sherlock");
let expected = "sherlock\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(files_without_match, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file.py", "foo");
cmd.arg("--files-without-match").arg("Sherlock");
let expected = "file.py\n";
eqnice!(expected, cmd.stdout());
});
rgtest!(after_context, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-A").arg("1").arg("Sherlock").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
be, to a very large extent, the result of luck. Sherlock Holmes
can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.stdout());
});
rgtest!(after_context_line_numbers, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-A").arg("1").arg("-n").arg("Sherlock").arg("sherlock");
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
2-Holmeses, success in the province of detective work must always
3:be, to a very large extent, the result of luck. Sherlock Holmes
4-can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.stdout());
});
rgtest!(before_context, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-B").arg("1").arg("Sherlock").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(before_context_line_numbers, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-B").arg("1").arg("-n").arg("Sherlock").arg("sherlock");
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
2-Holmeses, success in the province of detective work must always
3:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(context, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-C").arg("1").arg("world|attached").arg("sherlock");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
--
but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.stdout());
});
rgtest!(context_line_numbers, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("-C").arg("1").arg("-n").arg("world|attached").arg("sherlock");
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
2-Holmeses, success in the province of detective work must always
--
5-but Doctor Watson has to have it taken out for him and dusted,
6:and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.stdout());
});
rgtest!(max_filesize_parse_error_length, |_: Dir, mut cmd: TestCommand| {
cmd.arg("--max-filesize").arg("44444444444444444444");
cmd.assert_non_empty_stderr();
});
rgtest!(max_filesize_parse_error_suffix, |_: Dir, mut cmd: TestCommand| {
cmd.arg("--max-filesize").arg("45k");
cmd.assert_non_empty_stderr();
});
rgtest!(max_filesize_parse_no_suffix, |dir: Dir, mut cmd: TestCommand| {
dir.create_size("foo", 40);
dir.create_size("bar", 60);
cmd.arg("--max-filesize").arg("50").arg("--files");
eqnice!("foo\n", cmd.stdout());
});
rgtest!(max_filesize_parse_k_suffix, |dir: Dir, mut cmd: TestCommand| {
dir.create_size("foo", 3048);
dir.create_size("bar", 4100);
cmd.arg("--max-filesize").arg("4K").arg("--files");
eqnice!("foo\n", cmd.stdout());
});
rgtest!(max_filesize_parse_m_suffix, |dir: Dir, mut cmd: TestCommand| {
dir.create_size("foo", 1000000);
dir.create_size("bar", 1400000);
cmd.arg("--max-filesize").arg("1M").arg("--files");
eqnice!("foo\n", cmd.stdout());
});
rgtest!(max_filesize_suffix_overflow, |dir: Dir, mut cmd: TestCommand| {
dir.create_size("foo", 1000000);
// 34359738368 is 2^35; with the G suffix (2^30 bytes), the limit would be
// 2^65 bytes, which overflows a u64 and must be rejected.
cmd.arg("--max-filesize").arg("34359738368G").arg("--files");
cmd.assert_non_empty_stderr();
});
rgtest!(ignore_hidden, |dir: Dir, mut cmd: TestCommand| {
dir.create(".sherlock", SHERLOCK);
cmd.arg("Sherlock").assert_err();
});
rgtest!(no_ignore_hidden, |dir: Dir, mut cmd: TestCommand| {
dir.create(".sherlock", SHERLOCK);
cmd.arg("--hidden").arg("Sherlock");
let expected = "\
.sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
.sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(ignore_git, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create_dir(".git");
dir.create(".gitignore", "sherlock\n");
cmd.arg("Sherlock");
cmd.assert_err();
});
rgtest!(ignore_generic, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create(".ignore", "sherlock\n");
cmd.arg("Sherlock");
cmd.assert_err();
});
rgtest!(ignore_ripgrep, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create(".rgignore", "sherlock\n");
cmd.arg("Sherlock");
cmd.assert_err();
});
rgtest!(no_ignore, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create(".gitignore", "sherlock\n");
cmd.arg("--no-ignore").arg("Sherlock");
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(ignore_git_parent, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "sherlock\n");
dir.create_dir("foo");
dir.create("foo/sherlock", SHERLOCK);
cmd.arg("Sherlock");
// Even though we search in foo/, which has no .gitignore, ripgrep will
// traverse parent directories and respect the gitignore files found.
cmd.current_dir(dir.path().join("foo"));
cmd.assert_err();
});
rgtest!(ignore_git_parent_stop, |dir: Dir, mut cmd: TestCommand| {
// This tests that searching parent directories for .gitignore files stops
// after it sees a .git directory. To test this, we create this directory
// hierarchy:
//
// .gitignore (contains `sherlock`)
// foo/
// .git/
// bar/
// sherlock
//
// And we perform the search inside `foo/bar/`. ripgrep will stop looking
// for .gitignore files after it sees `foo/.git/`, and therefore not
// respect the top-level `.gitignore` containing `sherlock`.
dir.create(".gitignore", "sherlock\n");
dir.create_dir("foo");
dir.create_dir("foo/.git");
dir.create_dir("foo/bar");
dir.create("foo/bar/sherlock", SHERLOCK);
cmd.arg("Sherlock");
cmd.current_dir(dir.path().join("foo").join("bar"));
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
// Like ignore_git_parent_stop, but with a .git file instead of a .git
// directory.
rgtest!(ignore_git_parent_stop_file, |dir: Dir, mut cmd: TestCommand| {
// This tests that searching parent directories for .gitignore files stops
// after it sees a .git *file*. A .git file is used for submodules. To test
// this, we create this directory hierarchy:
//
// .gitignore (contains `sherlock`)
// foo/
// .git
// bar/
// sherlock
//
// And we perform the search inside `foo/bar/`. ripgrep will stop looking
// for .gitignore files after it sees `foo/.git`, and therefore not
// respect the top-level `.gitignore` containing `sherlock`.
dir.create(".gitignore", "sherlock\n");
dir.create_dir("foo");
dir.create("foo/.git", "");
dir.create_dir("foo/bar");
dir.create("foo/bar/sherlock", SHERLOCK);
cmd.arg("Sherlock");
cmd.current_dir(dir.path().join("foo").join("bar"));
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(ignore_ripgrep_parent_no_stop, |dir: Dir, mut cmd: TestCommand| {
// This is like the `ignore_git_parent_stop` test, except it checks that
// ripgrep *doesn't* stop checking for .rgignore files.
dir.create(".rgignore", "sherlock\n");
dir.create_dir("foo");
dir.create_dir("foo/.git");
dir.create_dir("foo/bar");
dir.create("foo/bar/sherlock", SHERLOCK);
cmd.arg("Sherlock");
cmd.current_dir(dir.path().join("foo").join("bar"));
// The top-level .rgignore applies.
cmd.assert_err();
});
rgtest!(no_parent_ignore_git, |dir: Dir, mut cmd: TestCommand| {
// Set up a directory hierarchy like this:
//
// .git/
// .gitignore
// foo/
// .gitignore
// sherlock
// watson
//
// Where `.gitignore` contains `sherlock` and `foo/.gitignore` contains
// `watson`.
//
// Now *do the search* from the foo directory. By default, ripgrep will
// search parent directories for .gitignore files. The --no-ignore-parent
// flag should prevent that. At the same time, the `foo/.gitignore` file
// will still be respected (since the search is happening in `foo/`).
//
// In other words, we should only see results from `sherlock`, not from
// `watson`.
dir.create_dir(".git");
dir.create(".gitignore", "sherlock\n");
dir.create_dir("foo");
dir.create("foo/.gitignore", "watson\n");
dir.create("foo/sherlock", SHERLOCK);
dir.create("foo/watson", SHERLOCK);
cmd.arg("--no-ignore-parent").arg("Sherlock");
cmd.current_dir(dir.path().join("foo"));
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(symlink_nofollow, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo");
dir.create_dir("foo/bar");
dir.link_dir("foo/baz", "foo/bar/baz");
dir.create_dir("foo/baz");
dir.create("foo/baz/sherlock", SHERLOCK);
cmd.arg("Sherlock");
cmd.current_dir(dir.path().join("foo/bar"));
cmd.assert_err();
});
#[cfg(not(windows))]
rgtest!(symlink_follow, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo");
dir.create_dir("foo/bar");
dir.create_dir("foo/baz");
dir.create("foo/baz/sherlock", SHERLOCK);
dir.link_dir("foo/baz", "foo/bar/baz");
cmd.arg("-L").arg("Sherlock");
cmd.current_dir(dir.path().join("foo/bar"));
let expected = "\
baz/sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
baz/sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(unrestricted1, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create(".gitignore", "sherlock\n");
cmd.arg("-u").arg("Sherlock");
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(unrestricted2, |dir: Dir, mut cmd: TestCommand| {
dir.create(".sherlock", SHERLOCK);
cmd.arg("-uu").arg("Sherlock");
let expected = "\
.sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
.sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(unrestricted3, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("file", "foo\x00bar\nfoo\x00baz\n");
cmd.arg("-uuu").arg("foo");
let expected = "\
file:foo\x00bar
file:foo\x00baz
";
eqnice!(expected, cmd.stdout());
});
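// --vimgrep prints every match on its own line in file:line:column:text form,
// with 1-based column numbers, which is what Vim's 'grepprg' format expects.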
rgtest!(vimgrep, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--vimgrep").arg("Sherlock|Watson");
let expected = "\
sherlock:1:16:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:1:57:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:3:49:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:5:12:but Doctor Watson has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
rgtest!(vimgrep_no_line, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--vimgrep").arg("-N").arg("Sherlock|Watson");
let expected = "\
sherlock:16:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:57:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:49:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:12:but Doctor Watson has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
rgtest!(vimgrep_no_line_no_column, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.arg("--vimgrep").arg("-N").arg("--no-column").arg("Sherlock|Watson");
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:but Doctor Watson has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
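// --pre searches the stdout of the given command run against each file,
// rather than the file contents, so here the .xz file is searched via xzcat.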
rgtest!(preprocessing, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("xzcat") {
return;
}
dir.create_bytes("sherlock.xz", include_bytes!("./data/sherlock.xz"));
cmd.arg("--pre").arg("xzcat").arg("Sherlock").arg("sherlock.xz");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
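// -z/--search-zip searches inside compressed files by shelling out to the
// matching decompression program (gzip, bzip2, xz, lz4), hence the cmd_exists
// guards in the tests below.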
rgtest!(compressed_gzip, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("gzip") {
return;
}
dir.create_bytes("sherlock.gz", include_bytes!("./data/sherlock.gz"));
cmd.arg("-z").arg("Sherlock").arg("sherlock.gz");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(compressed_bzip2, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("bzip2") {
return;
}
dir.create_bytes("sherlock.bz2", include_bytes!("./data/sherlock.bz2"));
cmd.arg("-z").arg("Sherlock").arg("sherlock.bz2");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(compressed_xz, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("xz") {
return;
}
dir.create_bytes("sherlock.xz", include_bytes!("./data/sherlock.xz"));
cmd.arg("-z").arg("Sherlock").arg("sherlock.xz");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(compressed_lz4, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("lz4") {
return;
}
dir.create_bytes("sherlock.lz4", include_bytes!("./data/sherlock.lz4"));
cmd.arg("-z").arg("Sherlock").arg("sherlock.lz4");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(compressed_lzma, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("xz") {
return;
}
dir.create_bytes("sherlock.lzma", include_bytes!("./data/sherlock.lzma"));
cmd.arg("-z").arg("Sherlock").arg("sherlock.lzma");
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
be, to a very large extent, the result of luck. Sherlock Holmes
";
eqnice!(expected, cmd.stdout());
});
rgtest!(compressed_failing_gzip, |dir: Dir, mut cmd: TestCommand| {
if !cmd_exists("gzip") {
return;
}
dir.create("sherlock.gz", SHERLOCK);
cmd.arg("-z").arg("Sherlock").arg("sherlock.gz");
cmd.assert_non_empty_stderr();
});
rgtest!(binary_nosearch, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "foo\x00bar\nfoo\x00baz\n");
cmd.arg("foo").arg("file");
cmd.assert_err();
});
// The following two tests show a discrepancy in search results between
// searching with memory mapped files and stream searching. Stream searching
// uses a heuristic (that GNU grep also uses) where NUL bytes are replaced with
// the EOL terminator, which tends to avoid allocating large amounts of memory
// for really long "lines." The memory map searcher has no need to worry about
// such things, and more than that, it would be pretty hard for it to match the
// semantics of streaming search in this case.
//
// Binary files with lots of NULs aren't really part of the use case of ripgrep
// (or any other grep-like tool for that matter), so we shouldn't feel too bad
// about it.
rgtest!(binary_search_mmap, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "foo\x00bar\nfoo\x00baz\n");
cmd.arg("-a").arg("--mmap").arg("foo").arg("file");
eqnice!("foo\x00bar\nfoo\x00baz\n", cmd.stdout());
});
rgtest!(binary_search_no_mmap, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "foo\x00bar\nfoo\x00baz\n");
cmd.arg("-a").arg("--no-mmap").arg("foo").arg("file");
eqnice!("foo\x00bar\nfoo\x00baz\n", cmd.stdout());
});
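// Illustrative sketch (not ripgrep's actual implementation): the streaming
// heuristic described above can be thought of as rewriting NUL bytes to the
// line terminator before any line-oriented matching happens.
fn replace_nul_with_eol(haystack: &mut [u8], eol: u8) {
    for byte in haystack.iter_mut() {
        if *byte == b'\x00' {
            *byte = eol;
        }
    }
}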
rgtest!(files, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "");
dir.create_dir("dir");
dir.create("dir/file", "");
cmd.arg("--files");
eqnice!(sort_lines("file\ndir/file\n"), sort_lines(&cmd.stdout()));
});
rgtest!(type_list, |_: Dir, mut cmd: TestCommand| {
cmd.arg("--type-list");
// This can change over time, so just make sure we print something.
assert!(!cmd.stdout().is_empty());
});

tests/multiline.rs Normal file

@@ -0,0 +1,109 @@
use hay::SHERLOCK;
use util::{Dir, TestCommand};
// This tests that multiline matches that span multiple lines, but where
// multiple matches may begin and end on the same line work correctly.
rgtest!(overlap1, |dir: Dir, mut cmd: TestCommand| {
dir.create("test", "xxx\nabc\ndefxxxabc\ndefxxx\nxxx");
cmd.arg("-n").arg("-U").arg("abc\ndef").arg("test");
eqnice!("2:abc\n3:defxxxabc\n4:defxxx\n", cmd.stdout());
});
// Like overlap1, but tests the case where one match ends at precisely the same
// location at which the next match begins.
rgtest!(overlap2, |dir: Dir, mut cmd: TestCommand| {
dir.create("test", "xxx\nabc\ndefabc\ndefxxx\nxxx");
cmd.arg("-n").arg("-U").arg("abc\ndef").arg("test");
eqnice!("2:abc\n3:defabc\n4:defxxx\n", cmd.stdout());
});
// Tests that even in a multiline search, a '.' does not match a newline.
rgtest!(dot_no_newline, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-U", "of this world.+detective work", "sherlock",
]);
cmd.assert_err();
});
// Tests that the --multiline-dotall flag causes '.' to match a newline.
rgtest!(dot_all, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-U", "--multiline-dotall",
"of this world.+detective work", "sherlock",
]);
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
2:Holmeses, success in the province of detective work must always
";
eqnice!(expected, cmd.stdout());
});
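// Illustrative sketch (assumes the `regex` crate is available to the test
// crate): --multiline-dotall corresponds to the regex engine's "dot matches
// new line" option, which can also be switched on inline with `(?s)`.
#[test]
fn dotall_sketch() {
    let hay = "of this world\nthe province of detective work";
    assert!(!regex::Regex::new(r"world.+detective").unwrap().is_match(hay));
    assert!(regex::Regex::new(r"(?s)world.+detective").unwrap().is_match(hay));
}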
// Tests that --only-matching works in multiline mode.
rgtest!(only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-U", "--only-matching",
r"Watson|Sherlock\p{Any}+?Holmes", "sherlock",
]);
let expected = "\
1:Watson
1:Sherlock
2:Holmes
3:Sherlock Holmes
5:Watson
";
eqnice!(expected, cmd.stdout());
});
// Tests that --vimgrep works in multiline mode.
rgtest!(vimgrep, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-U", "--vimgrep",
r"Watson|Sherlock\p{Any}+?Holmes", "sherlock",
]);
let expected = "\
sherlock:1:16:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:1:57:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:2:57:Holmeses, success in the province of detective work must always
sherlock:3:49:be, to a very large extent, the result of luck. Sherlock Holmes
sherlock:5:12:but Doctor Watson has to have it taken out for him and dusted,
";
eqnice!(expected, cmd.stdout());
});
// Tests that multiline search works when reading from stdin. This is an
// important test because multiline search must read the entire contents of
// what it is searching into memory before executing the search.
rgtest!(stdin, |_: Dir, mut cmd: TestCommand| {
cmd.args(&[
"-n", "-U", r"of this world\p{Any}+?detective work",
]);
let expected = "\
1:For the Doctor Watsons of this world, as opposed to the Sherlock
2:Holmeses, success in the province of detective work must always
";
eqnice!(expected, cmd.pipe(SHERLOCK));
});
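// Sketch (hypothetical helper, not part of the suite; assumes the `regex`
// crate): because multiline search needs the whole haystack in memory, a
// caller searching stdin has to buffer the entire input before matching.
fn stdin_matches_multiline(pattern: &regex::Regex) -> bool {
    use std::io::Read;
    let mut haystack = String::new();
    std::io::stdin().read_to_string(&mut haystack).expect("readable stdin");
    pattern.is_match(&haystack)
}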
// Test that multiline search and contextual matches work.
rgtest!(context, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
cmd.args(&[
"-n", "-U", "-C1",
r"detective work\p{Any}+?result of luck", "sherlock",
]);
let expected = "\
1-For the Doctor Watsons of this world, as opposed to the Sherlock
2:Holmeses, success in the province of detective work must always
3:be, to a very large extent, the result of luck. Sherlock Holmes
4-can extract a clew from a wisp of straw or a flake of cigar ash;
";
eqnice!(expected, cmd.stdout());
});

tests/regression.rs Normal file

@@ -0,0 +1,564 @@
use hay::SHERLOCK;
use util::{Dir, TestCommand, sort_lines};
// See: https://github.com/BurntSushi/ripgrep/issues/16
rgtest!(r16, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "ghi/");
dir.create_dir("ghi");
dir.create_dir("def/ghi");
dir.create("ghi/toplevel.txt", "xyz");
dir.create("def/ghi/subdir.txt", "xyz");
cmd.arg("xyz").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/25
rgtest!(r25, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "/llvm/");
dir.create_dir("src/llvm");
dir.create("src/llvm/foo", "test");
cmd.arg("test");
eqnice!("src/llvm/foo:test\n", cmd.stdout());
cmd.current_dir(dir.path().join("src"));
eqnice!("llvm/foo:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/30
rgtest!(r30, |dir: Dir, mut cmd: TestCommand| {
dir.create(".gitignore", "vendor/**\n!vendor/manifest");
dir.create_dir("vendor");
dir.create("vendor/manifest", "test");
eqnice!("vendor/manifest:test\n", cmd.arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/49
rgtest!(r49, |dir: Dir, mut cmd: TestCommand| {
dir.create(".gitignore", "foo/bar");
dir.create_dir("test/foo/bar");
dir.create("test/foo/bar/baz", "test");
cmd.arg("xyz").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/50
rgtest!(r50, |dir: Dir, mut cmd: TestCommand| {
dir.create(".gitignore", "XXX/YYY/");
dir.create_dir("abc/def/XXX/YYY");
dir.create_dir("ghi/XXX/YYY");
dir.create("abc/def/XXX/YYY/bar", "test");
dir.create("ghi/XXX/YYY/bar", "test");
cmd.arg("xyz").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/64
rgtest!(r64, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("dir");
dir.create_dir("foo");
dir.create("dir/abc", "");
dir.create("foo/abc", "");
eqnice!("foo/abc\n", cmd.arg("--files").arg("foo").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/65
rgtest!(r65, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "a/");
dir.create_dir("a");
dir.create("a/foo", "xyz");
dir.create("a/bar", "xyz");
cmd.arg("xyz").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/67
rgtest!(r67, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "/*\n!/dir");
dir.create_dir("dir");
dir.create_dir("foo");
dir.create("foo/bar", "test");
dir.create("dir/bar", "test");
eqnice!("dir/bar:test\n", cmd.arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/87
rgtest!(r87, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "foo\n**no-vcs**");
dir.create("foo", "test");
cmd.arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/90
rgtest!(r90, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "!.foo");
dir.create(".foo", "test");
eqnice!(".foo:test\n", cmd.arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/93
rgtest!(r93, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "192.168.1.1");
eqnice!("foo:192.168.1.1\n", cmd.arg(r"(\d{1,3}\.){3}\d{1,3}").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/99
rgtest!(r99, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo1", "test");
dir.create("foo2", "zzz");
dir.create("bar", "test");
eqnice!(
sort_lines("bar\ntest\n\nfoo1\ntest\n"),
sort_lines(&cmd.arg("-j1").arg("--heading").arg("test").stdout())
);
});
// See: https://github.com/BurntSushi/ripgrep/issues/105
rgtest!(r105_part1, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "zztest");
eqnice!("foo:1:3:zztest\n", cmd.arg("--vimgrep").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/105
rgtest!(r105_part2, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "zztest");
eqnice!("foo:1:3:zztest\n", cmd.arg("--column").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/127
rgtest!(r127, |dir: Dir, mut cmd: TestCommand| {
// Set up a directory hierarchy like this:
//
// .gitignore
// foo/
// sherlock
// watson
//
// Where `.gitignore` contains `foo/sherlock`.
//
// ripgrep should ignore 'foo/sherlock', giving us results only from
// 'foo/watson'. The original bug was that, on Windows, ripgrep included both
// 'foo/sherlock' and 'foo/watson' in the search results.
dir.create_dir(".git");
dir.create(".gitignore", "foo/sherlock\n");
dir.create_dir("foo");
dir.create("foo/sherlock", SHERLOCK);
dir.create("foo/watson", SHERLOCK);
let expected = "\
foo/watson:For the Doctor Watsons of this world, as opposed to the Sherlock
foo/watson:be, to a very large extent, the result of luck. Sherlock Holmes
";
assert_eq!(expected, cmd.arg("Sherlock").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/128
rgtest!(r128, |dir: Dir, mut cmd: TestCommand| {
dir.create_bytes("foo", b"01234567\x0b\n\x0b\n\x0b\n\x0b\nx");
eqnice!("foo:5:x\n", cmd.arg("-n").arg("x").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/131
//
// TODO(burntsushi): Darwin doesn't like this test for some reason. Probably
// due to the weird file path.
#[cfg(not(target_os = "macos"))]
rgtest!(r131, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", "TopÑapa");
dir.create("TopÑapa", "test");
cmd.arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/137
//
// TODO(burntsushi): Figure out how to make this test work on Windows. Right
// now it gives "access denied" errors when trying to create a file symlink.
// For now, disable test on Windows.
#[cfg(not(windows))]
rgtest!(r137, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.link_file("sherlock", "sym1");
dir.link_file("sherlock", "sym2");
let expected = "\
./sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
./sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
sym1:For the Doctor Watsons of this world, as opposed to the Sherlock
sym1:be, to a very large extent, the result of luck. Sherlock Holmes
sym2:For the Doctor Watsons of this world, as opposed to the Sherlock
sym2:be, to a very large extent, the result of luck. Sherlock Holmes
";
cmd.arg("-j1").arg("Sherlock").arg("./").arg("sym1").arg("sym2");
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/156
rgtest!(r156, |dir: Dir, mut cmd: TestCommand| {
let expected = r#"#parse('widgets/foo_bar_macros.vm')
#parse ( 'widgets/mobile/foo_bar_macros.vm' )
#parse ("widgets/foobarhiddenformfields.vm")
#parse ( "widgets/foo_bar_legal.vm" )
#include( 'widgets/foo_bar_tips.vm' )
#include('widgets/mobile/foo_bar_macros.vm')
#include ("widgets/mobile/foo_bar_resetpw.vm")
#parse('widgets/foo-bar-macros.vm')
#parse ( 'widgets/mobile/foo-bar-macros.vm' )
#parse ("widgets/foo-bar-hiddenformfields.vm")
#parse ( "widgets/foo-bar-legal.vm" )
#include( 'widgets/foo-bar-tips.vm' )
#include('widgets/mobile/foo-bar-macros.vm')
#include ("widgets/mobile/foo-bar-resetpw.vm")
"#;
dir.create("testcase.txt", expected);
cmd.arg("-N");
cmd.arg(r#"#(?:parse|include)\s*\(\s*(?:"|')[./A-Za-z_-]+(?:"|')"#);
cmd.arg("testcase.txt");
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/184
rgtest!(r184, |dir: Dir, mut cmd: TestCommand| {
dir.create(".gitignore", ".*");
dir.create_dir("foo/bar");
dir.create("foo/bar/baz", "test");
cmd.arg("test");
eqnice!("foo/bar/baz:test\n", cmd.stdout());
cmd.current_dir(dir.path().join("./foo/bar"));
eqnice!("baz:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/199
rgtest!(r199, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "tEsT");
eqnice!("foo:tEsT\n", cmd.arg("--smart-case").arg(r"\btest\b").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/206
rgtest!(r206, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo");
dir.create("foo/bar.txt", "test");
cmd.arg("test").arg("-g").arg("*.txt");
eqnice!("foo/bar.txt:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/210
#[cfg(unix)]
rgtest!(r210, |dir: Dir, mut cmd: TestCommand| {
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
let badutf8 = OsStr::from_bytes(&b"foo\xffbar"[..]);
// APFS does not support creating files with invalid UTF-8 bytes.
// https://github.com/BurntSushi/ripgrep/issues/559
if dir.try_create(badutf8, "test").is_ok() {
cmd.arg("-H").arg("test").arg(badutf8);
assert_eq!(b"foo\xffbar:test\n".to_vec(), cmd.output().stdout);
}
});
// See: https://github.com/BurntSushi/ripgrep/issues/228
rgtest!(r228, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo");
cmd.arg("--ignore-file").arg("foo").arg("test").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/229
rgtest!(r229, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "economie");
cmd.arg("-S").arg("[E]conomie").assert_err();
});
// See: https://github.com/BurntSushi/ripgrep/issues/251
rgtest!(r251, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "привет\nПривет\nПрИвЕт");
let expected = "foo:привет\nfoo:Привет\nfoo:ПрИвЕт\n";
eqnice!(expected, cmd.arg("-i").arg("привет").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/256
#[cfg(not(windows))]
rgtest!(r256, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("bar");
dir.create("bar/baz", "test");
dir.link_dir("bar", "foo");
eqnice!("foo/baz:test\n", cmd.arg("test").arg("foo").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/256
#[cfg(not(windows))]
rgtest!(r256_j1, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("bar");
dir.create("bar/baz", "test");
dir.link_dir("bar", "foo");
eqnice!("foo/baz:test\n", cmd.arg("-j1").arg("test").arg("foo").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/270
rgtest!(r270, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "-test");
cmd.arg("-e").arg("-test");
eqnice!("foo:-test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/279
rgtest!(r279, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "test");
eqnice!("", cmd.arg("-q").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/391
rgtest!(r391, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create("lock", "");
dir.create("bar.py", "");
dir.create(".git/packed-refs", "");
dir.create(".git/description", "");
cmd.args(&[
"--no-ignore", "--hidden", "--follow", "--files",
"--glob",
"!{.git,node_modules,plugged}/**",
"--glob",
"*.{js,json,php,md,styl,scss,sass,pug,html,config,py,cpp,c,go,hs}",
]);
eqnice!("bar.py\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/405
rgtest!(r405, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir("foo/bar");
dir.create_dir("bar/foo");
dir.create("foo/bar/file1.txt", "test");
dir.create("bar/foo/file2.txt", "test");
cmd.arg("-g").arg("!/foo/**").arg("test");
eqnice!("bar/foo/file2.txt:test\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/428
#[cfg(not(windows))]
rgtest!(r428_color_context_path, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", "foo\nbar");
cmd.args(&[
"-A1", "-H", "--no-heading", "-N",
"--colors=match:none", "--color=always",
"foo",
]);
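// `\x1b[0m` is the ANSI SGR reset sequence and `\x1b[35m` selects a magenta
// foreground, ripgrep's default color for file paths.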
let expected = format!(
"{colored_path}:foo\n{colored_path}-bar\n",
colored_path=
"\x1b\x5b\x30\x6d\x1b\x5b\x33\x35\x6dsherlock\x1b\x5b\x30\x6d"
);
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/428
rgtest!(r428_unrecognized_style, |_: Dir, mut cmd: TestCommand| {
cmd.arg("--colors=match:style:").arg("Sherlock");
cmd.assert_err();
let output = cmd.cmd().output().unwrap();
let stderr = String::from_utf8_lossy(&output.stderr);
let expected = "\
unrecognized style attribute ''. Choose from: nobold, bold, nointense, \
intense, nounderline, underline.
";
eqnice!(expected, stderr);
});
// See: https://github.com/BurntSushi/ripgrep/issues/451
rgtest!(r451_only_matching_as_in_issue, |dir: Dir, mut cmd: TestCommand| {
dir.create("digits.txt", "1 2 3\n");
cmd.arg("--only-matching").arg(r"[0-9]+").arg("digits.txt");
let expected = "\
1
2
3
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/451
rgtest!(r451_only_matching, |dir: Dir, mut cmd: TestCommand| {
dir.create("digits.txt", "1 2 3\n123\n");
cmd.args(&[
"--only-matching", "--column", r"[0-9]", "digits.txt",
]);
let expected = "\
1:1:1
1:3:2
1:5:3
2:1:1
2:2:2
2:3:3
";
eqnice!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/483
rgtest!(r483_matching_no_stdout, |dir: Dir, mut cmd: TestCommand| {
dir.create("file.py", "");
cmd.arg("--quiet").arg("--files").arg("--glob").arg("*.py");
eqnice!("", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/483
rgtest!(r483_non_matching_exit_code, |dir: Dir, mut cmd: TestCommand| {
dir.create("file.rs", "");
cmd.arg("--quiet").arg("--files").arg("--glob").arg("*.py");
cmd.assert_err();
});
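// Note: --quiet suppresses all output, but ripgrep still reports the outcome
// through its exit status: success when something was found, failure when
// nothing was.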
// See: https://github.com/BurntSushi/ripgrep/issues/493
rgtest!(r493, |dir: Dir, mut cmd: TestCommand| {
dir.create("input.txt", "peshwaship 're seminomata");
cmd.arg("-o").arg(r"\b 're \b").arg("input.txt");
assert_eq!(" 're \n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/506
rgtest!(r506_word_not_parenthesized, |dir: Dir, mut cmd: TestCommand| {
dir.create("wb.txt", "min minimum amin\nmax maximum amax");
cmd.arg("-w").arg("-o").arg("min|max").arg("wb.txt");
eqnice!("min\nmax\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/553
rgtest!(r553_switch, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
sherlock:For the Doctor Watsons of this world, as opposed to the Sherlock
sherlock:be, to a very large extent, the result of luck. Sherlock Holmes
";
cmd.arg("-i").arg("sherlock");
eqnice!(expected, cmd.stdout());
// Repeat the `i` flag to make sure everything still works.
eqnice!(expected, cmd.arg("-i").stdout());
});
rgtest!(r553_flag, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
Holmeses, success in the province of detective work must always
--
but Doctor Watson has to have it taken out for him and dusted,
and exhibited clearly, with a label attached.
";
cmd.arg("-C").arg("1").arg(r"world|attached").arg("sherlock");
eqnice!(expected, cmd.stdout());
let expected = "\
For the Doctor Watsons of this world, as opposed to the Sherlock
and exhibited clearly, with a label attached.
";
eqnice!(expected, cmd.arg("-C").arg("0").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/568
rgtest!(r568_leading_hyphen_option_args, |dir: Dir, mut cmd: TestCommand| {
dir.create("file", "foo bar -baz\n");
cmd.arg("-e-baz").arg("-e").arg("-baz").arg("file");
eqnice!("foo bar -baz\n", cmd.stdout());
let mut cmd = dir.command();
cmd.arg("-rni").arg("bar").arg("file");
eqnice!("foo ni -baz\n", cmd.stdout());
let mut cmd = dir.command();
cmd.arg("-r").arg("-n").arg("-i").arg("bar").arg("file");
eqnice!("foo -n -baz\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/599
//
// This test used to check that we emitted color escape sequences even for
// empty matches, but with the addition of the JSON output format, clients no
// longer need to rely on escape sequences to parse matches. Therefore, we no
// longer emit useless escape sequences.
rgtest!(r599, |dir: Dir, mut cmd: TestCommand| {
dir.create("input.txt", "\n\ntest\n");
cmd.args(&[
"--color", "ansi",
"--colors", "path:none",
"--colors", "line:none",
"--colors", "match:fg:red",
"--colors", "match:style:nobold",
"--line-number",
r"^$",
"input.txt",
]);
let expected = "\
1:
2:
";
eqnice_repr!(expected, cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/693
rgtest!(r693_context_in_contextless_mode, |dir: Dir, mut cmd: TestCommand| {
dir.create("foo", "xyz\n");
dir.create("bar", "xyz\n");
cmd.arg("-C1").arg("-c").arg("--sort-files").arg("xyz");
eqnice!("bar:1\nfoo:1\n", cmd.stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/807
rgtest!(r807, |dir: Dir, mut cmd: TestCommand| {
dir.create_dir(".git");
dir.create(".gitignore", ".a/b");
dir.create_dir(".a/b");
dir.create_dir(".a/c");
dir.create(".a/b/file", "test");
dir.create(".a/c/file", "test");
eqnice!(".a/c/file:test\n", cmd.arg("--hidden").arg("test").stdout());
});
// See: https://github.com/BurntSushi/ripgrep/issues/900
rgtest!(r900, |dir: Dir, mut cmd: TestCommand| {
dir.create("sherlock", SHERLOCK);
dir.create("pat", "");
cmd.arg("-fpat").arg("sherlock").assert_err();
});

File diff suppressed because it is too large


@@ -1,11 +1,10 @@
use std::env;
use std::error;
use std::fmt;
use std::ffi::OsStr;
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::process;
use std::str::FromStr;
use std::process::{self, Command};
use std::sync::atomic::{ATOMIC_USIZE_INIT, AtomicUsize, Ordering};
use std::thread;
use std::time::Duration;
@@ -13,24 +12,60 @@ use std::time::Duration;
static TEST_DIR: &'static str = "ripgrep-tests";
static NEXT_ID: AtomicUsize = ATOMIC_USIZE_INIT;
/// `WorkDir` represents a directory in which tests are run.
/// Set up an empty work directory and return a command pointing to the
/// ripgrep executable whose CWD is set to the work directory.
///
/// The name given will be used to create the directory. Generally, it should
/// correspond to the test name.
pub fn setup(test_name: &str) -> (Dir, TestCommand) {
let dir = Dir::new(test_name);
let cmd = dir.command();
(dir, cmd)
}
/// Like `setup`, but uses PCRE2 as the underlying regex engine.
pub fn setup_pcre2(test_name: &str) -> (Dir, TestCommand) {
let mut dir = Dir::new(test_name);
dir.pcre2(true);
let cmd = dir.command();
(dir, cmd)
}
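// Usage sketch (illustrative only, not one of the real tests): most tests go
// through the rgtest! macro, but setup can also be called directly.
#[test]
fn setup_usage_sketch() {
    let (dir, mut cmd) = setup("setup_usage_sketch");
    dir.create("haystack", "needle\n");
    assert_eq!("haystack:needle\n", cmd.arg("needle").stdout());
}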
/// Break the given string into lines, sort them and then join them back
/// together. This is useful for testing output from ripgrep that may not
/// always be in the same order.
pub fn sort_lines(lines: &str) -> String {
let mut lines: Vec<&str> = lines.trim().lines().collect();
lines.sort();
format!("{}\n", lines.join("\n"))
}
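// For example, sort_lines("foo\nbar\n") yields "bar\nfoo\n", so assertions do
// not depend on the order in which parallel workers report results.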
/// Returns true if and only if the given program can be successfully executed
/// with a `--help` flag.
pub fn cmd_exists(program: &str) -> bool {
Command::new(program).arg("--help").output().is_ok()
}
/// Dir represents a directory in which tests should be run.
///
/// Directories are created from a global atomic counter to avoid duplicates.
#[derive(Debug)]
pub struct WorkDir {
#[derive(Clone, Debug)]
pub struct Dir {
/// The directory in which this test executable is running.
root: PathBuf,
/// The directory in which the test should run. If a test needs to create
/// files, they should go in here. This directory is also used as the CWD
/// for any processes created by the test.
dir: PathBuf,
/// Set to true when the test should use PCRE2 as the regex engine.
pcre2: bool,
}
impl WorkDir {
impl Dir {
/// Create a new test working directory with the given name. The name
/// does not need to be distinct for each invocation, but should correspond
/// to a logical grouping of tests.
pub fn new(name: &str) -> WorkDir {
pub fn new(name: &str) -> Dir {
let id = NEXT_ID.fetch_add(1, Ordering::SeqCst);
let root = env::current_exe()
.unwrap()
@@ -42,12 +77,24 @@ impl WorkDir {
.join(name)
.join(&format!("{}", id));
nice_err(&dir, repeat(|| fs::create_dir_all(&dir)));
WorkDir {
Dir {
root: root,
dir: dir,
pcre2: false,
}
}
/// Use PCRE2 for this test.
pub fn pcre2(&mut self, yes: bool) {
self.pcre2 = yes;
}
/// Returns true if and only if this test is configured to use PCRE2 as
/// the regex engine.
pub fn is_pcre2(&self) -> bool {
self.pcre2
}
/// Create a new file with the given name and contents in this directory,
/// or panic on error.
pub fn create<P: AsRef<Path>>(&self, name: P, contents: &str) {
@@ -75,18 +122,19 @@ impl WorkDir {
/// Create a new file with the given name and contents in this directory,
/// or panic on error.
pub fn create_bytes<P: AsRef<Path>>(&self, name: P, contents: &[u8]) {
let path = self.dir.join(name);
nice_err(&path, self.try_create_bytes(&path, contents));
let path = self.dir.join(&name);
nice_err(&path, self.try_create_bytes(name, contents));
}
/// Try to create a new file with the given name and contents in this
/// directory.
fn try_create_bytes<P: AsRef<Path>>(
pub fn try_create_bytes<P: AsRef<Path>>(
&self,
path: P,
name: P,
contents: &[u8],
) -> io::Result<()> {
let mut file = File::create(&path)?;
let path = self.dir.join(name);
let mut file = File::create(path)?;
file.write_all(contents)?;
file.flush()
}
@@ -106,11 +154,22 @@ impl WorkDir {
/// Creates a new command that is set to use the ripgrep executable in
/// this working directory.
pub fn command(&self) -> process::Command {
///
/// This also:
///
/// * Unsets the `RIPGREP_CONFIG_PATH` environment variable.
/// * Sets the `--path-separator` to `/` so that paths have the same output
/// on all systems. Tests that need to check `--path-separator` itself
/// can simply pass it again to override it.
pub fn command(&self) -> TestCommand {
let mut cmd = process::Command::new(&self.bin());
cmd.env_remove("RIPGREP_CONFIG_PATH");
cmd.current_dir(&self.dir);
cmd
cmd.arg("--path-separator").arg("/");
if self.is_pcre2() {
cmd.arg("--pcre2");
}
TestCommand { dir: self.clone(), cmd: cmd }
}
/// Returns the path to the ripgrep executable.
@@ -174,16 +233,54 @@ impl WorkDir {
let _ = fs::remove_file(&target);
nice_err(&target, symlink_file(&src, &target));
}
}
/// A simple wrapper around a process::Command with some conveniences.
#[derive(Debug)]
pub struct TestCommand {
/// The dir used to launched this command.
dir: Dir,
/// The actual command we use to control the process.
cmd: Command,
}
impl TestCommand {
/// Returns a mutable reference to the underlying command.
pub fn cmd(&mut self) -> &mut Command {
&mut self.cmd
}
/// Add an argument to pass to the command.
pub fn arg<A: AsRef<OsStr>>(&mut self, arg: A) -> &mut TestCommand {
self.cmd.arg(arg);
self
}
/// Add any number of arguments to the command.
pub fn args<I, A>(
&mut self,
args: I,
) -> &mut TestCommand
where I: IntoIterator<Item=A>,
A: AsRef<OsStr>
{
self.cmd.args(args);
self
}
/// Set the working directory for this command.
///
/// Note that this does not need to be called normally, since the creation
/// of this TestCommand causes its working directory to be set to the
/// test's directory automatically.
pub fn current_dir<P: AsRef<Path>>(&mut self, dir: P) -> &mut TestCommand {
self.cmd.current_dir(dir);
self
}
/// Runs and captures the stdout of the given command.
///
/// If the return type could not be created from a string, then this
/// panics.
pub fn stdout<E: fmt::Debug, T: FromStr<Err=E>>(
&self,
cmd: &mut process::Command,
) -> T {
let o = self.output(cmd);
pub fn stdout(&mut self) -> String {
let o = self.output();
let stdout = String::from_utf8_lossy(&o.stdout);
match stdout.parse() {
Ok(t) => t,
@@ -197,23 +294,13 @@ impl WorkDir {
}
}
/// Gets the output of a command. If the command failed, then this panics.
pub fn output(&self, cmd: &mut process::Command) -> process::Output {
let output = cmd.output().unwrap();
self.expect_success(cmd, output)
}
/// Pipe `input` to a command, and collect the output.
pub fn pipe(
&self,
cmd: &mut process::Command,
input: &str
) -> process::Output {
cmd.stdin(process::Stdio::piped());
cmd.stdout(process::Stdio::piped());
cmd.stderr(process::Stdio::piped());
pub fn pipe(&mut self, input: &str) -> String {
self.cmd.stdin(process::Stdio::piped());
self.cmd.stdout(process::Stdio::piped());
self.cmd.stderr(process::Stdio::piped());
let mut child = cmd.spawn().unwrap();
let mut child = self.cmd.spawn().unwrap();
// Pipe input to child process using a separate thread to avoid
// risk of deadlock between parent and child process.
@@ -223,20 +310,86 @@ impl WorkDir {
write!(stdin, "{}", input)
});
let output = self.expect_success(
cmd,
child.wait_with_output().unwrap(),
);
let output = self.expect_success(child.wait_with_output().unwrap());
worker.join().unwrap().unwrap();
output
let stdout = String::from_utf8_lossy(&output.stdout);
match stdout.parse() {
Ok(t) => t,
Err(err) => {
panic!(
"could not convert from string: {:?}\n\n{}",
err,
stdout
);
}
}
}
/// If `o` is not the output of a successful process run, then this panics
/// with an informative message.
fn expect_success(
&self,
cmd: &process::Command,
o: process::Output
) -> process::Output {
/// Gets the output of a command. If the command failed, then this panics.
pub fn output(&mut self) -> process::Output {
let output = self.cmd.output().unwrap();
self.expect_success(output)
}
/// Runs the command and asserts that it resulted in an error exit code.
pub fn assert_err(&mut self) {
let o = self.cmd.output().unwrap();
if o.status.success() {
panic!(
"\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
self.cmd,
self.dir.dir.display(),
o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr)
);
}
}
/// Runs the command and asserts that its exit code matches expected exit
/// code.
pub fn assert_exit_code(&mut self, expected_code: i32) {
let code = self.cmd.output().unwrap().status.code().unwrap();
assert_eq!(
expected_code, code,
"\n\n===== {:?} =====\n\
expected exit code did not match\
\n\nexpected: {}\
\n\nfound: {}\
\n\n=====\n",
self.cmd,
expected_code,
code
);
}
/// Runs the command and asserts that something was printed to stderr.
pub fn assert_non_empty_stderr(&mut self) {
let o = self.cmd.output().unwrap();
if o.status.success() || o.stderr.is_empty() {
panic!(
"\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
self.cmd,
self.dir.dir.display(),
o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr)
);
}
}
fn expect_success(&self, o: process::Output) -> process::Output {
if !o.status.success() {
let suggest =
if o.stderr.is_empty() {
@@ -254,81 +407,21 @@ impl WorkDir {
\n\nstdout: {}\
\n\nstderr: {}\
\n\n==========\n",
suggest, cmd, self.dir.display(), o.status,
suggest, self.cmd, self.dir.dir.display(), o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr));
}
o
}
/// Runs the given command and asserts that it resulted in an error exit
/// code.
pub fn assert_err(&self, cmd: &mut process::Command) {
let o = cmd.output().unwrap();
if o.status.success() {
panic!(
"\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
cmd,
self.dir.display(),
o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr)
);
}
}
/// Runs the given command and asserts that its exit code matches expected
/// exit code.
pub fn assert_exit_code(
&self,
expected_code: i32,
cmd: &mut process::Command,
) {
let code = cmd.status().unwrap().code().unwrap();
assert_eq!(
expected_code, code,
"\n\n===== {:?} =====\n\
expected exit code did not match\
\n\nexpected: {}\
\n\nfound: {}\
\n\n=====\n",
cmd, expected_code, code
);
}
/// Runs the given command and asserts that something was printed to
/// stderr.
pub fn assert_non_empty_stderr(&self, cmd: &mut process::Command) {
let o = cmd.output().unwrap();
if o.status.success() || o.stderr.is_empty() {
panic!("\n\n===== {:?} =====\n\
command succeeded but expected failure!\
\n\ncwd: {}\
\n\nstatus: {}\
\n\nstdout: {}\n\nstderr: {}\
\n\n=====\n",
cmd, self.dir.display(), o.status,
String::from_utf8_lossy(&o.stdout),
String::from_utf8_lossy(&o.stderr));
}
}
}
fn nice_err<P: AsRef<Path>, T, E: error::Error>(
path: P,
fn nice_err<T, E: error::Error>(
path: &Path,
res: Result<T, E>,
) -> T {
match res {
Ok(t) => t,
Err(err) => {
panic!("{}: {:?}", path.as_ref().display(), err);
}
Err(err) => panic!("{}: {:?}", path.display(), err),
}
}