Compare commits

...

125 Commits

Author SHA1 Message Date
Michael Eischer 22fe7015a5 FIXME 2024-05-18 22:09:50 +02:00
Michael Eischer 7b3ddd751d repository: wait max 1 minute for lock removal if context is canceled
The toplevel context in restic is only canceled if the user interrupts a
restic operation. If the network connection has failed, this can require
waiting the full retry duration of 15 minutes, which is a bad user
experience for interactive usage. Thus, limit the delay to one minute in
this case.
2024-05-18 22:08:00 +02:00
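A rough illustration of the pattern above (hypothetical helper, not restic's actual code): when the parent context is already canceled, the cleanup work gets a fresh context bounded to one minute instead of inheriting the full retry budget.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// removeLockWithBoundedWait runs the removal with the caller's context, but if
// that context is already canceled it falls back to a fresh context limited to
// one minute, so cleanup cannot hang for the full retry duration.
func removeLockWithBoundedWait(ctx context.Context, remove func(context.Context) error) error {
	if ctx.Err() != nil {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
	}
	return remove(ctx)
}

func main() {
	canceled, cancel := context.WithCancel(context.Background())
	cancel() // simulate an interrupted restic operation

	err := removeLockWithBoundedWait(canceled, func(ctx context.Context) error {
		deadline, _ := ctx.Deadline()
		fmt.Println("lock removal must finish by", deadline)
		return nil
	})
	fmt.Println("err:", err)
}
```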
Michael Eischer ebb726e621 retry: reduce total number of retries
Retries in restic try to solve two main problems:
- retry a temporarily failed operation
- tolerate temporary network interruptions

The first problem only requires a few retries, whereas the latter benefits
primarily from spreading the requests over a longer duration.

Increasing the default multiplier and the initial interval works for
both cases. The first few retries only take a few seconds, while later
retries quickly reach the maximum interval of one minute. This ensures
that the total number of retries issued by restic remains at around
21 for a 15-minute period. As the concurrency in restic is
bounded, retries drastically reduce the number of requests sent to a
backend. This helps to prevent overloading the backend.
2024-05-18 22:06:48 +02:00
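The arithmetic behind the "around 21 retries" figure can be sketched as follows; the initial interval and multiplier used here are illustrative assumptions, not necessarily the exact values picked in this commit.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative parameters: with a larger initial interval and multiplier,
	// the delay quickly reaches the one-minute cap, so the retry count over a
	// 15-minute budget stays small and roughly constant.
	initial := 2 * time.Second
	multiplier := 2.0
	maxInterval := time.Minute
	totalBudget := 15 * time.Minute

	delay := initial
	var elapsed time.Duration
	retries := 0
	for elapsed+delay <= totalBudget {
		elapsed += delay
		retries++
		fmt.Printf("retry %2d after %8v (elapsed %v)\n", retries, delay, elapsed)
		delay = time.Duration(float64(delay) * multiplier)
		if delay > maxInterval {
			delay = maxInterval
		}
	}
	fmt.Println("retries within the 15 minute budget:", retries)
}
```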
Michael Eischer 3b223a3d87 retry: ensure that there's always at least one retry
Previously, if an operation failed after 15 minutes, it would never
be retried. This meant that large backend requests were more unreliable
than smaller ones.
2024-05-18 22:06:47 +02:00
Michael Eischer 7c05a2c2ba retry: limit retries based on elapsed time not count
Depending on how long an operation takes to fail, the total retry
duration can currently vary between 1.5 and 15 minutes. In particular
for temporarily interrupted network connections, the former timeout is
too short. Thus always use a limit of 15 minutes.
2024-05-18 22:06:26 +02:00
Michael Eischer 3c996a40f9 retry: explicitly log failed requests
This simplifies finding the request in the log output that caused an
operation to fail.
2024-05-18 22:04:42 +02:00
Michael Eischer 1dfe1b8732
Merge pull request #4802 from MichaelEischer/backend-cleanups
Repository: Remove Backend() method
2024-05-18 22:02:45 +02:00
Michael Eischer 223aa22cb0 replace some uses of restic.Repository with finegrained interfaces 2024-05-18 21:42:51 +02:00
Michael Eischer 291c9677de restic/repository: remove Backend() method 2024-05-18 21:42:51 +02:00
Michael Eischer 673496b091 repository: clean cache between CheckPack retries
The cache cleanup pattern is also used in ListPack etc.
2024-05-18 21:42:51 +02:00
Michael Eischer 3d2410ed50 Replace some repo.RemoveUnpacked usages
These will eventually be blocked as they do not delete Snapshots.
2024-05-18 21:42:51 +02:00
Michael Eischer d2c26e33f3 repository: remove further usages of repo.Backend() 2024-05-18 21:42:51 +02:00
Michael Eischer 8a425c2f0a remove usages of repo.Backend() from tests 2024-05-18 21:42:51 +02:00
Michael Eischer aa4647f773 repository: unexport PackBlobIterator 2024-05-18 21:42:51 +02:00
Michael Eischer 94e863885c check: move verification of individual pack file to repository 2024-05-18 21:42:50 +02:00
Michael Eischer e40943a75d restic: remove backend usage from lock test 2024-05-18 21:38:31 +02:00
Michael Eischer 67e2ba0d40 repository: Lock requires *repository.Repository
This allows the Lock function to access the backend, even once the
Backend method is removed from the interface.
2024-05-18 21:38:31 +02:00
Michael Eischer d8b184b3d3 repository: convert test helper to return *repository.Repository 2024-05-18 21:38:31 +02:00
Michael Eischer a1ca5e15c4 migrations: add temporary hack for s3_layout
The migration will be removed after the next restic release anyway.
Thus, there's no need for a clean implementation.
2024-05-18 21:38:31 +02:00
Michael Eischer 34d90aecf9 migrations: move logic of upgrade_repo_v2 to repository package
The migration modifies repository internals and thus should live within
the repository package.
2024-05-18 21:38:31 +02:00
Michael Eischer ab9077bc13 replace usages of backend.Remove() with repository.RemoveUnpacked()
RemoveUnpacked will eventually block removal of all filetypes other than
snapshots. However, getting there requires a major refactor to provide
some components with privileged access.
2024-05-18 21:38:31 +02:00
Michael Eischer 8274f5b101 prune: remove Backend.IsNotExist()
Only handling one specific error is not particularly useful.
2024-05-18 21:38:31 +02:00
Michael Eischer 9795198189 debug: remove Backend.Stat() usage 2024-05-18 21:38:31 +02:00
Michael Eischer 0c1ba6d95d backend: remove unused Location method 2024-05-18 21:38:31 +02:00
Michael Eischer eb6c653f89
Merge pull request #4800 from MichaelEischer/cleanup-load
Retry loading of corrupted data from backend / cache
2024-05-18 21:34:54 +02:00
Michael Eischer 74d90653e0 check: use ReadFull to load pack header in checkPack
This ensures that the pack header is actually read completely.
Previously, for a truncated file it was possible to only read a part of
the header, as backend.Load(...) is not guaranteed to return as many
bytes as requested by the length parameter.
2024-05-18 21:28:54 +02:00
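A small sketch of the `io.ReadFull` pattern this commit relies on, with a stand-in loader instead of restic's `backend.Load`: a partial read of the header is turned into an explicit error rather than being processed silently.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// load mimics a backend Load call: fn gets a reader that may yield fewer bytes
// than the requested length (here the "file" is truncated to 10 bytes).
func load(length int, fn func(rd io.Reader) error) error {
	truncated := bytes.NewReader([]byte("only-10b!!"))
	return fn(io.LimitReader(truncated, int64(length)))
}

func main() {
	const headerLen = 32
	buf := make([]byte, headerLen)

	err := load(headerLen, func(rd io.Reader) error {
		// io.ReadFull fails with ErrUnexpectedEOF unless all headerLen bytes
		// arrive, so a truncated pack header is detected instead of a partial
		// read being treated as success.
		_, err := io.ReadFull(rd, buf)
		return err
	})
	fmt.Println("read error:", err) // unexpected EOF for the truncated file
}
```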
Michael Eischer 8f8d872a68 fix compatibility with go 1.19 2024-05-18 21:28:54 +02:00
Michael Eischer ff0744b3af check: test checkPack retries 2024-05-18 21:28:54 +02:00
Michael Eischer 987c3b250c repository: test retries of ListPack 2024-05-18 21:28:54 +02:00
Michael Eischer bf16096771 repository: test LoadBlob retries 2024-05-18 21:28:54 +02:00
Michael Eischer 4f45668b7c repository: rework and extend LoadRaw tests 2024-05-18 21:28:54 +02:00
Michael Eischer ac805d6838 cache: cleanup debug logs 2024-05-18 21:28:54 +02:00
Michael Eischer 5214af88e2 cache: test forget behavior 2024-05-18 21:28:54 +02:00
Michael Eischer 3ff063e913 check: verify pack a second time if broken 2024-05-18 21:28:54 +02:00
Michael Eischer 385cee09dc repository: fix caching of tree packs in LoadBlobsFromPack 2024-05-18 21:28:54 +02:00
Michael Eischer e734746f75 cache: forget cached file at most once
This is inspired by the circuit breaker pattern used for distributed
systems. If too many requests fail, then it is better to immediately
fail new requests for a limited time to give the backend time to
recover.

By forgetting a file in the cache at most once, we can ensure that
a broken file is retrieved from the backend only once more. Previously,
if the file stored there was broken, it would be cached and deleted
continuously. Now it is retrieved only once again; all later requests
just use the cached copy and either succeed or fail immediately.
2024-05-18 21:28:54 +02:00
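A minimal sketch of the forget-at-most-once idea with invented types (restic's real cache lives in its own package): each file may be dropped from the cache only once, so a copy that is also broken in the backend is re-fetched a single time instead of being cached and deleted in a loop.

```go
package main

import (
	"fmt"
	"sync"
)

// forgetOnceCache remembers which entries have already been dropped and
// refuses to drop them a second time.
type forgetOnceCache struct {
	mu        sync.Mutex
	data      map[string][]byte
	forgotten map[string]bool
}

func newForgetOnceCache() *forgetOnceCache {
	return &forgetOnceCache{data: map[string][]byte{}, forgotten: map[string]bool{}}
}

func (c *forgetOnceCache) Store(name string, buf []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[name] = buf
}

// Forget drops a (possibly broken) cached file, but only the first time it is
// asked to; later calls keep the cached copy so requests fail fast instead of
// repeatedly re-downloading a file that is broken in the backend as well.
func (c *forgetOnceCache) Forget(name string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.forgotten[name] {
		return fmt.Errorf("circuit breaker: %v was already forgotten once", name)
	}
	c.forgotten[name] = true
	delete(c.data, name)
	return nil
}

func main() {
	c := newForgetOnceCache()
	c.Store("pack-1234", []byte("corrupt data"))

	fmt.Println(c.Forget("pack-1234")) // <nil>: dropped, will be re-fetched once
	fmt.Println(c.Forget("pack-1234")) // error: not dropped again
}
```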
Michael Eischer 97a307df1a cache: Always use cached file if it exists
A file is always cached whole. Thus, any out-of-bounds access will also
fail when directed at the backend. To handle the case in which the cached
file is broken, the caller must call Cache.Forget(h) for the file in
question.
2024-05-18 21:28:54 +02:00
Michael Eischer 8cce06d915 repair packs: drop experimental warning
This warning should already have been removed once the feature flag was
dropped.
2024-05-18 21:28:54 +02:00
Michael Eischer 433a6aad29 repository: remove redundant blob loading fallback from RepairPacks
LoadBlobsFromPack already implements the same fallback behavior.
2024-05-18 21:28:54 +02:00
Michael Eischer e401af07b2 check: fix error message formatting 2024-05-18 21:28:54 +02:00
Michael Eischer 7017adb7e9 repository: retry failed ListPack once 2024-05-18 21:28:54 +02:00
Michael Eischer e33ce7f408 repository: retry failed LoadBlob once 2024-05-18 21:28:54 +02:00
Michael Eischer 2ace242f36 repository: make reloading broken files explicit 2024-05-18 21:28:54 +02:00
Michael Eischer e9390352a7 cache: code cleanups 2024-05-18 21:26:00 +02:00
Michael Eischer 503c8140b1 repository: unify blob decoding code 2024-05-18 21:26:00 +02:00
Michael Eischer 6563f1d2ca repository: remove redundant debug log 2024-05-18 21:26:00 +02:00
Michael Eischer 021fb49559 repository: Implement repository.LoadUnpacked using LoadRaw
Both functions were using a similar implementation.
2024-05-18 21:26:00 +02:00
Michael Eischer 779c8d3527 debug/repair packs/upgrade repo v2: use repository.LoadRaw
This replaces calling the low-level backend.Load() method.
2024-05-18 21:26:00 +02:00
Michael Eischer 1d6d3656b0 repository: move backend.LoadAll to repository.LoadRaw
LoadRaw also includes improved context cancellation handling similar to the
implementation in repository.LoadUnpacked.

The removed cache backend test will be added again later on.
2024-05-18 21:26:00 +02:00
Michael Eischer 47232bf8b0 backend: move LimitReadCloser to util package
The helper is only intended for usage by backend implementations.
2024-05-18 21:26:00 +02:00
Michael Eischer dcd151147c
Merge pull request #4803 from restic/permanent-retry-failure
Do not retry permanent backend failures
2024-05-18 20:07:06 +02:00
Michael Eischer 53d15bcd1b retry: add circuit breaker to load method
If a file exhausts its retry attempts, then it is likely not accessible
the next time. Thus, immediately fail all load calls for that file to
avoid useless retries.
2024-05-18 19:59:26 +02:00
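A hedged sketch of such a per-file circuit breaker in a retrying wrapper (names are invented for illustration): once a file has exhausted its retries, later Load calls for the same file fail immediately instead of issuing new backend requests.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errCircuitBroken = errors.New("circuit breaker open for file")

// retryLoader wraps a loader and stops retrying files that already failed.
type retryLoader struct {
	mu       sync.Mutex
	failed   map[string]struct{}
	attempts int
	load     func(name string) error
}

func (r *retryLoader) Load(name string) error {
	r.mu.Lock()
	_, broken := r.failed[name]
	r.mu.Unlock()
	if broken {
		// the file already exhausted its retries; fail fast without new requests
		return errCircuitBroken
	}

	var err error
	for i := 0; i < r.attempts; i++ {
		if err = r.load(name); err == nil {
			return nil
		}
	}

	r.mu.Lock()
	r.failed[name] = struct{}{}
	r.mu.Unlock()
	return err
}

func main() {
	l := &retryLoader{
		failed:   map[string]struct{}{},
		attempts: 3,
		load:     func(string) error { return errors.New("backend unreachable") },
	}
	fmt.Println(l.Load("data/abc")) // fails after 3 attempts
	fmt.Println(l.Load("data/abc")) // fails immediately via the circuit breaker
}
```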
Michael Eischer 394c8ca3ed rest/rclone/s3/sftp/swift: move short file detection behind feature gate
These backends tend to use a large variety of server implementations.
Some of those implementations might prove problematic with the new
checks.
2024-05-18 19:59:26 +02:00
Michael Eischer 6328b7e1f5 replace "too small" with "too short" in error messages 2024-05-18 19:59:26 +02:00
Michael Eischer 53561474d9 update changelog with persistent backend error handling 2024-05-18 19:59:26 +02:00
Michael Eischer aeb7eb245c retry: do not retry permanent errors
This is currently gated behind a feature flag as some unexpected
interactions might show up in the wild.
2024-05-18 19:59:26 +02:00
Michael Eischer bf8cc59889 Use generic backend-error-redesign feature flag instead of http-timeouts
An individual flag for each change of the backend error handling would
be too fine-grained. Thus, add a generic flag.
2024-05-18 19:54:52 +02:00
Michael Eischer 4740528a0b backend: add tests for IsPermanentError 2024-05-18 19:54:52 +02:00
Michael Eischer 6a85df7297 backend: add IsPermanentError() method to interface 2024-05-18 19:54:52 +02:00
Michael Eischer cfc420664a mem: stricter handling of out of bounds requests 2024-05-18 19:54:52 +02:00
Michael Eischer d40f23e716 azure/b2/gs/s3/swift: adapt cloud backend 2024-05-18 19:54:51 +02:00
Michael Eischer e793c002ec local: stricter handling of short files 2024-05-18 19:54:21 +02:00
Michael Eischer b4895ebd76 rest: rework error reporting and report too short files 2024-05-18 19:54:21 +02:00
Michael Eischer eaa3f81d6b sftp: check for truncated files without an extra backend request 2024-05-18 19:54:21 +02:00
Michael Eischer c6d74458ee sftp: improve handling of too short files 2024-05-18 19:54:21 +02:00
Michael Eischer 7ed560a201
Merge pull request #4796 from MichaelEischer/parallel-dump-load
dump: Parallelize loading large files
2024-05-14 22:35:44 +02:00
Michael Eischer 92221c2a6d
Merge pull request #4708 from zmanda/windows-securitydesc
Back up and restore SecurityDescriptors on Windows
2024-05-12 14:14:39 +00:00
Michael Eischer b5fdb1d637
Merge pull request #4782 from MichaelEischer/fix-sftp-performance
Fix sftp upload performance
2024-05-12 15:28:33 +02:00
Michael Eischer e4f9bce384
Merge pull request #4792 from restic/request-watchdog
backend: enforce that backend HTTP requests make progress
2024-05-09 23:55:30 +02:00
Michael Eischer 3740700ddc add http timeouts to changelog 2024-05-09 23:46:17 +02:00
Michael Eischer ebd01a4675 backend: add tests for watchdogRoundTripper 2024-05-09 23:46:17 +02:00
Michael Eischer 8778670232 backend: cancel stuck http requests
Requests that make no upload or download progress within a timeout are
canceled.
2024-05-09 23:46:17 +02:00
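The progress watchdog can be reduced to a reader that resets a timer on every successful read and cancels the request when the timer fires; this is an illustrative simplification, not the `watchdogRoundtripper` added by the commit.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"strings"
	"time"
)

// progressReader cancels the request if no bytes arrive within the timeout;
// every successful Read resets the watchdog, so only truly stuck transfers die.
type progressReader struct {
	rd      io.Reader
	timer   *time.Timer
	timeout time.Duration
}

func newProgressReader(rd io.Reader, cancel context.CancelFunc, timeout time.Duration) *progressReader {
	return &progressReader{rd: rd, timeout: timeout, timer: time.AfterFunc(timeout, cancel)}
}

func (p *progressReader) Read(buf []byte) (int, error) {
	n, err := p.rd.Read(buf)
	if n > 0 {
		p.timer.Reset(p.timeout) // progress was made, push the deadline out
	}
	return n, err
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	body := newProgressReader(strings.NewReader("some response body"), cancel, 100*time.Millisecond)

	if _, err := io.ReadAll(body); err != nil {
		fmt.Println("read failed:", err)
	}
	// no further progress: the watchdog fires and the request context is canceled
	time.Sleep(200 * time.Millisecond)
	fmt.Println("context canceled after stall:", ctx.Err() != nil)
}
```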
Michael Eischer 0987c731ec backend: configure protocol-level connection health checks
This should detect a connection that is stuck for more than 2 minutes.
2024-05-09 23:46:17 +02:00
aneesh-n a4fd1b91e5
Fix review comments
Change lowerPrivileges from bool to atomic.Bool.
Add missing cleanup from upstream go-winio.
Add handling for ERROR_NOT_ALL_ASSIGNED warning.
2024-05-06 16:54:08 -06:00
Michael Eischer e184538ddf dump: add changelog 2024-05-05 12:12:21 +02:00
Michael Eischer 4d55a62ada bloblru: add test for GetOrCompute 2024-05-05 12:00:25 +02:00
Michael Eischer 7cce667f92 fuse: switch to use bloblru.GetOrCompute 2024-05-05 11:38:42 +02:00
Michael Eischer bd03af2feb dump: add GetOrCompute to bloblru cache 2024-05-05 11:38:42 +02:00
Michael Eischer 45509eafc8 dump: load blobs of a file from repository in parallel 2024-05-05 11:38:42 +02:00
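A rough sketch of loading a file's blobs in parallel while preserving their order, using the `errgroup` package that already appears in the imports elsewhere in this changeset; `loadBlob` is a stand-in for the repository call, not restic's actual API.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"golang.org/x/sync/errgroup"
)

// loadBlob is a stand-in for fetching one blob from the repository.
func loadBlob(_ context.Context, id string) ([]byte, error) {
	return []byte("<" + id + ">"), nil
}

// loadFileParallel fetches all blobs of a file concurrently, bounded by the
// number of backend connections, and returns them in their original order.
func loadFileParallel(ctx context.Context, blobIDs []string, connections int) ([][]byte, error) {
	results := make([][]byte, len(blobIDs))
	wg, ctx := errgroup.WithContext(ctx)
	wg.SetLimit(connections)

	for i, id := range blobIDs {
		i, id := i, id // capture loop variables (pre-Go 1.22 semantics)
		wg.Go(func() error {
			buf, err := loadBlob(ctx, id)
			if err != nil {
				return err
			}
			results[i] = buf // each goroutine writes a distinct slot
			return nil
		})
	}
	return results, wg.Wait()
}

func main() {
	parts, err := loadFileParallel(context.Background(), []string{"blob1", "blob2", "blob3"}, 2)
	if err != nil {
		panic(err)
	}
	var file strings.Builder
	for _, p := range parts {
		file.Write(p)
	}
	fmt.Println(file.String())
}
```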
Michael Eischer 24c1822220
Merge pull request #4794 from flow-c/master
Update 060_forget.rst
2024-05-04 08:25:06 +00:00
flow-c d4477a5a99
Update 060_forget.rst
Replace deprecated `-1` with `unlimited` in calendar-related `--keep-*` options
2024-05-04 09:32:25 +02:00
Michael Eischer ffe5439149
Merge pull request #4605 from MichaelEischer/better-restorer-error-handling
Rework repository.StreamPacks & better restorer error handling
2024-05-01 16:37:41 +02:00
Michael Eischer 676f0dc60d add changelog 2024-05-01 16:28:57 +02:00
Michael Eischer 1e57057953
Merge pull request #4789 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob-1.3.2
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob from 1.3.1 to 1.3.2
2024-05-01 10:45:47 +00:00
Michael Eischer 1ba0af6993
Merge pull request #4787 from restic/dependabot/go_modules/github.com/klauspost/compress-1.17.8
build(deps): bump github.com/klauspost/compress from 1.17.7 to 1.17.8
2024-05-01 10:44:33 +00:00
Michael Eischer ffc41ae62a
Merge pull request #4786 from restic/dependabot/go_modules/golang.org/x/net-0.24.0
build(deps): bump golang.org/x/net from 0.23.0 to 0.24.0
2024-05-01 10:41:26 +00:00
Michael Eischer 4832c2fbfa
Merge pull request #4790 from restic/dependabot/github_actions/golangci/golangci-lint-action-5
build(deps): bump golangci/golangci-lint-action from 4 to 5
2024-05-01 10:37:37 +00:00
dependabot[bot] 30609ae6b2
build(deps): bump golangci/golangci-lint-action from 4 to 5
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 4 to 5.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-01 01:45:43 +00:00
dependabot[bot] 502e5867a5
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.3.1 to 1.3.2.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.3.1...sdk/storage/azblob/v1.3.2)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-01 01:02:39 +00:00
dependabot[bot] 18a6d6b408
build(deps): bump github.com/klauspost/compress from 1.17.7 to 1.17.8
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.17.7 to 1.17.8.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.17.7...v1.17.8)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-01 01:02:22 +00:00
dependabot[bot] 3bb88e8307
build(deps): bump golang.org/x/net from 0.23.0 to 0.24.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.23.0 to 0.24.0.
- [Commits](https://github.com/golang/net/compare/v0.23.0...v0.24.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-01 01:02:11 +00:00
aneesh-n 672f6cd776
Fix review comments for privileges and security flags 2024-04-29 17:29:51 -06:00
aneesh-n 08c6945d61
Fix review comments 2024-04-29 16:21:38 -06:00
Aneesh N 3f76b902e5
Merge branch 'master' into windows-securitydesc 2024-04-29 14:40:34 -06:00
Michael Eischer ccac7c7fb3
Merge pull request #3067 from DRON-666/vss-options
Add options to fine tune VSS snapshots
2024-04-29 18:09:47 +00:00
DRON-666 ccd35565ee
s/sec./seconds 2024-04-29 01:48:22 +03:00
DRON-666 125dba23c5 Rearange code 2024-04-29 01:27:34 +03:00
DRON-666 7ee889bb0d Use S_FALSE and MaxInt 2024-04-29 01:25:25 +03:00
DRON-666 90b168eb6c isMountPointExcluded to isMountPointIncluded 2024-04-29 01:23:50 +03:00
DRON-666 24330c19a8 Use kebab case in option names 2024-04-29 01:21:33 +03:00
DRON-666 5703e5a652 Fix texts and comments 2024-04-29 01:18:46 +03:00
DRON-666 0a8f9c5d9c vss: Add tests for "provider" option 2024-04-28 22:45:21 +03:00
DRON-666 739d3243d9 vss: Update docs and changelog 2024-04-28 22:45:21 +03:00
DRON-666 bb0f93ef3d vss: Add "provider" option 2024-04-28 22:45:21 +03:00
DRON-666 3bac1f0135 vss: Fix issues reported by linters 2024-04-28 22:45:21 +03:00
DRON-666 88c509e3e9 vss: Change `ErrorHandler` signature
We don't need `error` here: the only existing implementation
of `ErrorHandler` always calls `Backup.Error`, and all
implementations of `Backup.Error` always return nil.
2024-04-28 22:44:16 +03:00
DRON-666 9d3d915e2c vss: Add some tests 2024-04-28 22:44:16 +03:00
DRON-666 9182e6bab5 vss: Update docs and changelog 2024-04-28 22:44:16 +03:00
DRON-666 c4f67c0064 vss: Add volume filtering
Add options to exclude all mountpoints and arbitrary volumes from snapshotting.
2024-04-28 22:44:15 +03:00
DRON-666 7470e5356e vss: Add "timeout" option
Change multiple "callAsyncFunctionAndWait" calls with a fixed timeout
to a timeout calculated from the deadline.
2024-04-28 22:44:15 +03:00
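The deadline-based timeout boils down to computing the remaining budget before each wait instead of passing the same fixed value every time; a small illustrative sketch (the real code drives Windows VSS async operations):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// remainingTimeout returns how long a single wait may still take given an
// overall deadline, instead of handing every call the same fixed timeout.
func remainingTimeout(deadline time.Time) (time.Duration, error) {
	left := time.Until(deadline)
	if left <= 0 {
		return 0, errors.New("vss: deadline exceeded before the operation started")
	}
	return left, nil
}

func main() {
	deadline := time.Now().Add(120 * time.Second)

	// the first async step consumes part of the budget …
	time.Sleep(50 * time.Millisecond)
	t1, _ := remainingTimeout(deadline)

	// … so the second step gets whatever budget is left, not a fresh 120s.
	time.Sleep(50 * time.Millisecond)
	t2, _ := remainingTimeout(deadline)

	fmt.Printf("first wait limited to %v, second wait to %v\n",
		t1.Round(time.Millisecond), t2.Round(time.Millisecond))
}
```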
DRON-666 78dbc5ec58 vss: Add initial support for extended options 2024-04-28 22:44:15 +03:00
Michael Eischer a1d682ce0e add changelog for sftp performance fix 2024-04-28 11:58:08 +02:00
Michael Eischer 935327d480 sftp: slightly increase write concurrency
This should increase upload throughput for high latency links a bit.
2024-04-28 11:50:09 +02:00
Michael Eischer 669a669603 sftp: Fix upload performance issue
Since pkg/sftp 1.13.0, files were uploaded sequentially using 32KB chunks
instead of sending 64 chunks in parallel.
2024-04-28 11:48:26 +02:00
Michael Eischer 20d8eed400 repository: streamPack: separate requests for gap larger than 1MB
With most cloud providers, traffic is much more expensive than API
calls. Thus, bias streamPack slightly towards a few more API calls in
exchange for slightly less traffic.
2024-04-22 21:21:23 +02:00
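The trade-off can be sketched as a planner that merges adjacent blob ranges into a single request only while the gap between them stays at or below 1MB (hypothetical helper; the real logic lives in streamPack):

```go
package main

import "fmt"

type blobRange struct{ offset, length uint }

const maxGap = 1 << 20 // 1 MiB: beyond this, extra traffic costs more than an extra API call

// planRequests groups blob ranges (assumed sorted by offset and non-overlapping)
// into download requests, starting a new request whenever the gap to the
// previous blob exceeds maxGap.
func planRequests(blobs []blobRange) [][]blobRange {
	var requests [][]blobRange
	for _, b := range blobs {
		if len(requests) > 0 {
			last := requests[len(requests)-1]
			prev := last[len(last)-1]
			if b.offset-(prev.offset+prev.length) <= maxGap {
				requests[len(requests)-1] = append(last, b)
				continue
			}
		}
		requests = append(requests, []blobRange{b})
	}
	return requests
}

func main() {
	blobs := []blobRange{
		{offset: 0, length: 512 << 10},
		{offset: 600 << 10, length: 256 << 10}, // 88 KiB gap: merged into the first request
		{offset: 4 << 20, length: 128 << 10},   // gap above 1 MiB: its own request
	}
	for i, req := range planRequests(blobs) {
		fmt.Printf("request %d covers %d blob(s)\n", i+1, len(req))
	}
}
```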
Michael Eischer cf700d8794 repository: streamPack: reuse zstd decoder 2024-04-22 21:21:23 +02:00
Michael Eischer 666a0b0bdb repository: streamPack: replace streaming with chunked download
Due to the interface of streamPack, we cannot guarantee that operations
progress fast enough for the underlying connection to remain open. This
introduces partial failures which massively complicate the error
handling.

Switch to a simpler approach that retrieves the pack in chunks of 32MB.
If a blob is larger than this limit, then it is downloaded separately.

To avoid multiple copies in memory, an auxiliary interface
`discardReader` is introduced that allows directly accessing the
downloaded byte slices, while still supporting the streaming used by the
`check` command.
2024-04-22 21:21:23 +02:00
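A hedged sketch of the chunking rule described above, with invented helper names: download requests are capped at 32MB, and any blob above the limit is fetched on its own (the real implementation also exposes the downloaded bytes through the `discardReader` interface mentioned in the commit message).

```go
package main

import "fmt"

const maxChunkSize = 32 << 20 // 32 MiB per download request

// splitIntoChunks partitions consecutive blob sizes into download requests of
// at most maxChunkSize; a blob exceeding the limit becomes its own request.
func splitIntoChunks(blobSizes []int) [][]int {
	var chunks [][]int
	var current []int
	var currentSize int

	for _, size := range blobSizes {
		if size > maxChunkSize {
			// oversized blob: flush the current chunk and download it separately
			if len(current) > 0 {
				chunks = append(chunks, current)
				current, currentSize = nil, 0
			}
			chunks = append(chunks, []int{size})
			continue
		}
		if currentSize+size > maxChunkSize && len(current) > 0 {
			chunks = append(chunks, current)
			current, currentSize = nil, 0
		}
		current = append(current, size)
		currentSize += size
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}

func main() {
	sizes := []int{10 << 20, 15 << 20, 20 << 20, 40 << 20, 1 << 20}
	for i, c := range splitIntoChunks(sizes) {
		fmt.Printf("download %d: %d blob(s)\n", i+1, len(c))
	}
}
```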
Michael Eischer 621012dac0 repository: Add blob loading fallback to LoadBlobsFromPack
Try to retrieve individual blobs via LoadBlob if streaming did not work.
2024-04-21 21:35:55 +02:00
Aneesh Nireshwalia 062d408987
Clean up SecurityDescriptor helper 2024-02-24 14:23:04 -07:00
Aneesh Nireshwalia 5764300022
Add changelog and fix lint error 2024-02-24 13:47:49 -07:00
Aneesh Nireshwalia c0a1b9ada5
Update docs for security descriptors 2024-02-24 13:28:18 -07:00
Aneesh Nireshwalia 90916f53de
Add test cases for security descriptors 2024-02-24 13:27:01 -07:00
Aneesh Nireshwalia 70cf8e3788
Add support for backup/restore of security descriptors 2024-02-24 13:25:28 -07:00
Aneesh Nireshwalia e3e59fef24
Fix CombineErrors and fillExtendedAttr error handling 2024-02-24 13:22:34 -07:00
Aneesh Nireshwalia 09ce1b4e58
Create helper for SecurityDescriptor related functions 2024-02-24 13:16:25 -07:00
107 changed files with 3962 additions and 1434 deletions

View File

@ -261,7 +261,7 @@ jobs:
uses: actions/checkout@v4
- name: golangci-lint
uses: golangci/golangci-lint-action@v4
uses: golangci/golangci-lint-action@v5
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.57.1

View File

@ -0,0 +1,7 @@
Bugfix: Fix slow sftp upload performance
Since restic 0.12.1, the upload speed of the sftp backend to a remote server
has regressed significantly. This has been fixed.
https://github.com/restic/restic/issues/4209
https://github.com/restic/restic/pull/4782

View File

@ -1,8 +0,0 @@
Change: Don't retry to load files that don't exist
Restic used to always retry to load files. It now only retries to load
files if they exist.
https://github.com/restic/restic/issues/4515
https://github.com/restic/restic/issues/1523
https://github.com/restic/restic/pull/4520

View File

@ -0,0 +1,25 @@
Change: Redesign backend error handling to improve reliability
Restic now downloads pack files in large chunks instead of using a streaming
download. This prevents failures due to interrupted streams. The `restore`
command now also retries downloading individual blobs that cannot be retrieved.
HTTP requests that are stuck for more than two minutes while uploading or
downloading are now forcibly interrupted. This ensures that stuck requests are
retried after a short timeout.
Attempts to access a missing file or a truncated file will no longer be retried.
This avoids unnecessary retries in those cases.
Most parts of the new backend error handling can temporarily be disabled by
setting the environment variable
`RESTIC_FEATURES=backend-error-redesign=false`. Note that this feature flag will
be removed in the next minor restic version.
https://github.com/restic/restic/issues/4627
https://github.com/restic/restic/issues/4193
https://github.com/restic/restic/pull/4605
https://github.com/restic/restic/pull/4792
https://github.com/restic/restic/issues/4515
https://github.com/restic/restic/issues/1523
https://github.com/restic/restic/pull/4520

View File

@ -0,0 +1,22 @@
Enhancement: Add options to configure Windows Shadow Copy Service
Restic always used a 120-second timeout and unconditionally created VSS snapshots
for all volume mount points on disk. This behavior can now be fine-tuned with
new options, such as excluding specific volumes and mount points or completely
disabling automatic snapshotting of volume mount points.
For example:
restic backup --use-fs-snapshot -o vss.timeout=5m -o vss.exclude-all-mount-points=true
changes the timeout to five minutes and disables snapshotting of mount points on all volumes, and
restic backup --use-fs-snapshot -o vss.exclude-volumes="d:\;c:\mnt\;\\?\Volume{e2e0315d-9066-4f97-8343-eb5659b35762}"
excludes drive `d:`, mount point `c:\mnt` and a specific volume from VSS snapshotting.
restic backup --use-fs-snapshot -o vss.provider={b5946137-7b9f-4925-af80-51abd60b20d5}
uses 'Microsoft Software Shadow Copy provider 1.0' instead of the default provider.
https://github.com/restic/restic/pull/3067

View File

@ -1,7 +1,7 @@
Enhancement: Back up windows created time and file attributes like hidden flag
Restic did not back up windows-specific meta-data like created time and file attributes like hidden flag.
Restic now backs up file created time and file attributes like hidden, readonly and encrypted flag when backing up files and folders on windows.
Restic now backs up file created time and file attributes like hidden, readonly and encrypted flag when backing up files and folders on Windows.
https://github.com/restic/restic/pull/4611

View File

@ -0,0 +1,11 @@
Enhancement: Back up and restore SecurityDescriptors on Windows
Restic now backs up and restores SecurityDescriptors when backing up files and folders
on Windows. These include the owner, group, discretionary access control list (DACL)
and system access control list (SACL). This requires the user to be a member of backup
operators or the application to be run as admin.
If that is not the case, only the current user's owner, group and DACL will be backed up,
and during restore only the DACL of the backed-up file will be restored, while the current
user's owner and group will be set during the restore.
https://github.com/restic/restic/pull/4708

View File

@ -0,0 +1,8 @@
Enhancement: Improve `dump` performance for large files
The `dump` command now retrieves the data chunks for a file in parallel. This
improves download performance by up to a factor of the configured number of
parallel backend connections.
https://github.com/restic/restic/issues/3406
https://github.com/restic/restic/pull/4796

View File

@ -445,7 +445,16 @@ func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, o
}
func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
err := opts.Check(gopts, args)
var vsscfg fs.VSSConfig
var err error
if runtime.GOOS == "windows" {
if vsscfg, err = fs.ParseVSSConfig(gopts.extended); err != nil {
return err
}
}
err = opts.Check(gopts, args)
if err != nil {
return err
}
@ -547,8 +556,8 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
return err
}
errorHandler := func(item string, err error) error {
return progressReporter.Error(item, err)
errorHandler := func(item string, err error) {
_ = progressReporter.Error(item, err)
}
messageHandler := func(msg string, args ...interface{}) {
@ -557,7 +566,7 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
}
}
localVss := fs.NewLocalVss(errorHandler, messageHandler)
localVss := fs.NewLocalVss(errorHandler, messageHandler, vsscfg)
defer localVss.DeleteSnapshots()
targetFS = localVss
}

View File

@ -7,7 +7,6 @@ import (
"github.com/spf13/cobra"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@ -146,9 +145,9 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return nil
case "pack":
h := backend.Handle{Type: restic.PackFile, Name: id.String()}
buf, err := backend.LoadAll(ctx, nil, repo.Backend(), h)
if err != nil {
buf, err := repo.LoadRaw(ctx, restic.PackFile, id)
// allow returning broken pack files
if buf == nil {
return err
}

View File

@ -15,6 +15,7 @@ import (
"github.com/restic/restic/internal/checker"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
)
@ -347,7 +348,7 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
for err := range errChan {
errorsFound = true
Warnf("%v\n", err)
if err, ok := err.(*checker.ErrPackData); ok {
if err, ok := err.(*repository.ErrPackData); ok {
salvagePacks = append(salvagePacks, err.PackID)
}
}

View File

@ -20,7 +20,6 @@ import (
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
@ -316,10 +315,11 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
if err != nil {
panic(err)
}
be := repo.Backend()
h := backend.Handle{
Name: packID.String(),
Type: restic.PackFile,
pack, err := repo.LoadRaw(ctx, restic.PackFile, packID)
// allow processing broken pack files
if pack == nil {
return err
}
wg, ctx := errgroup.WithContext(ctx)
@ -331,19 +331,11 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
wg.Go(func() error {
for _, blob := range list {
Printf(" loading blob %v at %v (length %v)\n", blob.ID, blob.Offset, blob.Length)
buf := make([]byte, blob.Length)
err := be.Load(ctx, h, int(blob.Length), int64(blob.Offset), func(rd io.Reader) error {
n, err := io.ReadFull(rd, buf)
if err != nil {
return fmt.Errorf("read error after %d bytes: %v", n, err)
}
return nil
})
if err != nil {
Warnf("error read: %v\n", err)
if int(blob.Offset+blob.Length) > len(pack) {
Warnf("skipping truncated blob\n")
continue
}
buf := pack[blob.Offset : blob.Offset+blob.Length]
key := repo.Key()
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
@ -482,20 +474,12 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamine
func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, id restic.ID) error {
Printf("examine %v\n", id)
h := backend.Handle{
Type: restic.PackFile,
Name: id.String(),
}
fi, err := repo.Backend().Stat(ctx, h)
if err != nil {
return err
}
Printf(" file size is %v\n", fi.Size)
buf, err := backend.LoadAll(ctx, nil, repo.Backend(), h)
if err != nil {
buf, err := repo.LoadRaw(ctx, restic.PackFile, id)
// also process damaged pack files
if buf == nil {
return err
}
Printf(" file size is %v\n", len(buf))
gotID := restic.Hash(buf)
if !id.Equal(gotID) {
Printf(" wanted hash %v, got %v\n", id, gotID)
@ -514,7 +498,7 @@ func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repo
continue
}
checkPackSize(blobs, fi.Size)
checkPackSize(blobs, len(buf))
err = loadBlobs(ctx, opts, repo, id, blobs)
if err != nil {
@ -527,11 +511,11 @@ func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repo
Printf(" ========================================\n")
Printf(" inspect the pack itself\n")
blobs, _, err := repo.ListPack(ctx, id, fi.Size)
blobs, _, err := repo.ListPack(ctx, id, int64(len(buf)))
if err != nil {
return fmt.Errorf("pack %v: %v", id.Str(), err)
}
checkPackSize(blobs, fi.Size)
checkPackSize(blobs, len(buf))
if !blobsLoaded {
return loadBlobs(ctx, opts, repo, id, blobs)
@ -539,7 +523,7 @@ func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repo
return nil
}
func checkPackSize(blobs []restic.Blob, fileSize int64) {
func checkPackSize(blobs []restic.Blob, fileSize int) {
// track current size and offset
var size, offset uint64

View File

@ -285,10 +285,6 @@ func getUsedBlobs(ctx context.Context, repo restic.Repository, ignoreSnapshots r
err = restic.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
if err != nil {
if repo.Backend().IsNotExist(err) {
return nil, errors.Fatal("unable to load a tree from the repository: " + err.Error())
}
return nil, err
}
return usedBlobs, nil

View File

@ -1,11 +1,11 @@
package main
import (
"bytes"
"context"
"io"
"os"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@ -17,8 +17,6 @@ var cmdRepairPacks = &cobra.Command{
Use: "packs [packIDs...]",
Short: "Salvage damaged pack files",
Long: `
WARNING: The CLI for this command is experimental and will likely change in the future!
The "repair packs" command extracts intact blobs from the specified pack files, rebuilds
the index to remove the damaged pack files and removes the pack files from the repository.
@ -68,20 +66,17 @@ func runRepairPacks(ctx context.Context, gopts GlobalOptions, term *termstatus.T
printer.P("saving backup copies of pack files to current folder")
for id := range ids {
buf, err := repo.LoadRaw(ctx, restic.PackFile, id)
// corrupted data is fine
if buf == nil {
return err
}
f, err := os.OpenFile("pack-"+id.String(), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o666)
if err != nil {
return err
}
err = repo.Backend().Load(ctx, backend.Handle{Type: restic.PackFile, Name: id.String()}, 0, 0, func(rd io.Reader) error {
_, err := f.Seek(0, 0)
if err != nil {
return err
}
_, err = io.Copy(f, rd)
return err
})
if err != nil {
if _, err := io.Copy(f, bytes.NewReader(buf)); err != nil {
_ = f.Close()
return err
}

View File

@ -8,7 +8,6 @@ import (
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
@ -181,8 +180,7 @@ func filterAndReplaceSnapshot(ctx context.Context, repo restic.Repository, sn *r
if dryRun {
Verbosef("would delete empty snapshot\n")
} else {
h := backend.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
if err = repo.Backend().Remove(ctx, h); err != nil {
if err = repo.RemoveUnpacked(ctx, restic.SnapshotFile, *sn.ID()); err != nil {
return false, err
}
debug.Log("removed empty snapshot %v", sn.ID())
@ -241,8 +239,7 @@ func filterAndReplaceSnapshot(ctx context.Context, repo restic.Repository, sn *r
Verbosef("saved new snapshot %v\n", id.Str())
if forget {
h := backend.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
if err = repo.Backend().Remove(ctx, h); err != nil {
if err = repo.RemoveUnpacked(ctx, restic.SnapshotFile, *sn.ID()); err != nil {
return false, err
}
debug.Log("removed old snapshot %v", sn.ID())

View File

@ -5,7 +5,6 @@ import (
"github.com/spf13/cobra"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
@ -86,8 +85,7 @@ func changeTags(ctx context.Context, repo *repository.Repository, sn *restic.Sna
debug.Log("new snapshot saved as %v", id)
// Remove the old snapshot.
h := backend.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
if err = repo.Backend().Remove(ctx, h); err != nil {
if err = repo.RemoveUnpacked(ctx, restic.SnapshotFile, *sn.ID()); err != nil {
return false, err
}

View File

@ -416,12 +416,16 @@ func OpenRepository(ctx context.Context, opts GlobalOptions) (*repository.Reposi
}
report := func(msg string, err error, d time.Duration) {
Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
if d < 0 {
Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
} else {
Warnf("%v failed: %v\n", msg, err)
}
}
success := func(msg string, retries int) {
Warnf("%v operation successful after %d retries\n", msg, retries)
}
be = retry.New(be, 10, report, success)
be = retry.New(be, 15*time.Minute, report, success)
// wrap backend if a test specified a hook
if opts.backendTestHook != nil {

View File

@ -267,7 +267,7 @@ func removePacks(gopts GlobalOptions, t testing.TB, remove restic.IDSet) {
defer unlock()
for id := range remove {
rtest.OK(t, r.Backend().Remove(ctx, backend.Handle{Type: restic.PackFile, Name: id.String()}))
rtest.OK(t, r.RemoveUnpacked(ctx, restic.PackFile, id))
}
}
@ -291,7 +291,7 @@ func removePacksExcept(gopts GlobalOptions, t testing.TB, keep restic.IDSet, rem
if treePacks.Has(id) != removeTreePacks || keep.Has(id) {
return nil
}
return r.Backend().Remove(ctx, backend.Handle{Type: restic.PackFile, Name: id.String()})
return r.RemoveUnpacked(ctx, restic.PackFile, id)
}))
}

View File

@ -56,6 +56,39 @@ snapshot for each volume that contains files to backup. Files are read from the
VSS snapshot instead of the regular filesystem. This allows to backup files that are
exclusively locked by another process during the backup.
You can use additional options to change VSS behaviour:
* ``-o vss.timeout`` specifies the timeout for VSS snapshot creation; the default value is 120 seconds
* ``-o vss.exclude-all-mount-points`` disables automatic snapshotting of all volume mount points
* ``-o vss.exclude-volumes`` allows excluding specific volumes or volume mount points from snapshotting
* ``-o vss.provider`` specifies the VSS provider used for snapshotting
For example, a 2.5 minute timeout with snapshotting of mount points disabled can be specified as
.. code-block:: console
-o vss.timeout=2m30s -o vss.exclude-all-mount-points=true
and excluding drive ``d:\``, mount point ``c:\mnt`` and volume ``\\?\Volume{04ce0545-3391-11e0-ba2f-806e6f6e6963}\`` as
.. code-block:: console
-o vss.exclude-volumes="d:;c:\mnt\;\\?\volume{04ce0545-3391-11e0-ba2f-806e6f6e6963}"
The VSS provider can be specified by GUID
.. code-block:: console
-o vss.provider={3f900f90-00e9-440e-873a-96ca5eb079e5}
or by name
.. code-block:: console
-o vss.provider="Hyper-V IC Software Shadow Copy Provider"
``MS`` can also be used as an alias for ``Microsoft Software Shadow Copy provider 1.0``.
By default VSS ignores Outlook OST files. This is not a restriction of restic
but the default Windows VSS configuration. The files not to snapshot are
configured in the Windows registry under the following key:
@ -481,12 +514,17 @@ written, and the next backup needs to write new metadata again. If you really
want to save the access time for files and directories, you can pass the
``--with-atime`` option to the ``backup`` command.
Backing up full security descriptors on Windows is only possible when the user
has the ``SeBackupPrivilege`` privilege or is running as admin. This is a restriction
of Windows, not restic.
If neither of these conditions is met, only the owner, group and DACL will
be backed up.
Note that ``restic`` does not back up some metadata associated with files. Of
particular note are:
* File creation date on Unix platforms
* Inode flags on Unix platforms
* File ownership and ACLs on Windows
Reading data from a command
***************************

View File

@ -72,6 +72,11 @@ Restoring symbolic links on windows is only possible when the user has
``SeCreateSymbolicLinkPrivilege`` privilege or is running as admin. This is a
restriction of windows not restic.
Restoring full security descriptors on Windows is only possible when the user has
the ``SeRestorePrivilege``, ``SeSecurityPrivilege`` and ``SeTakeOwnershipPrivilege``
privileges or is running as admin. This is a restriction of Windows, not restic.
If neither of these conditions is met, only the DACL will be restored.
By default, restic does not restore files as sparse. Use ``restore --sparse`` to
enable the creation of sparse files if supported by the filesystem. Then restic
will restore long runs of zero bytes as holes in the corresponding files.

View File

@ -205,7 +205,7 @@ The ``forget`` command accepts the following policy options:
natural time boundaries and *not* relative to when you run ``forget``. Weeks
are Monday 00:00 to Sunday 23:59, days 00:00 to 23:59, hours :00 to :59, etc.
They also only count hours/days/weeks/etc which have one or more snapshots.
A value of ``-1`` will be interpreted as "forever", i.e. "keep all".
A value of ``unlimited`` will be interpreted as "forever", i.e. "keep all".
.. note:: All duration related options (``--keep-{within-,}*``) ignore snapshots
with a timestamp in the future (relative to when the ``forget`` command is

14
go.mod
View File

@ -2,9 +2,9 @@ module github.com/restic/restic
require (
cloud.google.com/go/storage v1.40.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.10.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
github.com/Backblaze/blazer v0.6.1
github.com/anacrolix/fuse v0.2.0
github.com/cenkalti/backoff/v4 v4.2.1
@ -13,7 +13,7 @@ require (
github.com/go-ole/go-ole v1.3.0
github.com/google/go-cmp v0.6.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/klauspost/compress v1.17.7
github.com/klauspost/compress v1.17.8
github.com/minio/minio-go/v7 v7.0.66
github.com/minio/sha256-simd v1.0.1
github.com/ncw/swift/v2 v2.0.2
@ -26,12 +26,12 @@ require (
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
go.uber.org/automaxprocs v1.5.3
golang.org/x/crypto v0.21.0
golang.org/x/net v0.23.0
golang.org/x/crypto v0.22.0
golang.org/x/net v0.24.0
golang.org/x/oauth2 v0.18.0
golang.org/x/sync v0.6.0
golang.org/x/sys v0.18.0
golang.org/x/term v0.18.0
golang.org/x/sys v0.19.0
golang.org/x/term v0.19.0
golang.org/x/text v0.14.0
golang.org/x/time v0.5.0
google.golang.org/api v0.170.0

28
go.sum
View File

@ -9,15 +9,15 @@ cloud.google.com/go/iam v1.1.7 h1:z4VHOhwKLF/+UYXAJDFwGtNF0b6gjsW1Pk9Ml0U/IoM=
cloud.google.com/go/iam v1.1.7/go.mod h1:J4PMPg8TtyurAUvSmPj8FF3EDgY1SPRZxcUGrn7WXGA=
cloud.google.com/go/storage v1.40.0 h1:VEpDQV5CJxFmJ6ueWNsKxcr1QAYOXEgxDa+sBbJahPw=
cloud.google.com/go/storage v1.40.0/go.mod h1:Rrj7/hKlG87BLqDJYtwR0fbPld8uJPbQ2ucUMY7Ir0g=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.10.0 h1:n1DH8TPV4qqPTje2RcUBYwtrTWlabVp4n46+74X2pn4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.10.0/go.mod h1:HDcZnuGbiyppErN6lB+idp4CKhjbc8gwjto6OPpyggM=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1 h1:E+OJmp2tPvt1W+amx48v1eqbjDYsgN+RzP4q16yV5eM=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1/go.mod h1:a6xsAQUZg+VsS3TJ05SRp524Hs4pZ/AeFSr5ENf0Yjo=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1 h1:sO0/P7g68FrryJzljemN+6GTssUXdANk6aJ7T1ZxnsQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1/go.mod h1:h8hyGFDsU5HMivxiS2iYFZsgDbU9OnnJ163x5UGVKYo=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 h1:LqbJ/WzJUwBf8UiaSzgX7aMclParm9/5Vgp+TY51uBQ=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2/go.mod h1:yInRyqWXAuaPrgI7p70+lDDgh3mlBohis29jGMISnmc=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.1 h1:fXPMAmuh0gDuRDey0atC8cXBuKIlqCzCkL8sm1n9Ov0=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.1/go.mod h1:SUZc9YRRHfx2+FAQKNDGrssXehqLpxmwRv2mC/5ntj4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2 h1:YUUxeiOWgdAQE3pXt2H7QXzZs0q8UBjgRbl56qo8GYM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2/go.mod h1:dmXQgZuiSubAecswZE+Sm8jkvEa7kQgTPVRvwL/nd0E=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1 h1:DzHpqpoJVaCgOUdVHxE8QB52S6NiVdDQvGlny1qvPqA=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/Backblaze/blazer v0.6.1 h1:xC9HyC7OcxRzzmtfRiikIEvq4HZYWjU6caFwX2EXw1s=
@ -114,8 +114,8 @@ github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.17.7 h1:ehO88t2UGzQK66LMdE8tibEd1ErmzZjNEqWkjLAKQQg=
github.com/klauspost/compress v1.17.7/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.17.8 h1:YcnTYrq7MikUT7k0Yb5eceMmALQPYBW/Xltxn0NAMnU=
github.com/klauspost/compress v1.17.8/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
@ -206,8 +206,8 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.22.0 h1:g1v0xeRhjcugydODzvb3mEM9SQ0HGp9s/nh3COQ/C30=
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@ -227,8 +227,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.24.0 h1:1PcaxkF854Fu3+lvBIx5SYn9wRlBzzcnHZSiaFFAb0w=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
@ -255,14 +255,14 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.19.0 h1:q5f1RH2jigJ1MoAWp2KTp3gm5zAGFUTarQZ5U386+4o=
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/term v0.19.0 h1:+ThwsDv+tYfnJFhF4L8jITxu1tdTWRTZpdsWgEgjL6Q=
golang.org/x/term v0.19.0/go.mod h1:2CuTdWZ7KHSQwUzKva0cbMg6q2DMI3Mmxp+gKJbskEk=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=

View File

@ -1970,7 +1970,7 @@ func TestArchiverContextCanceled(t *testing.T) {
})
// Ensure that the archiver itself reports the canceled context and not just the backend
repo := repository.TestRepositoryWithBackend(t, &noCancelBackend{mem.New()}, 0, repository.Options{})
repo, _ := repository.TestRepositoryWithBackend(t, &noCancelBackend{mem.New()}, 0, repository.Options{})
back := rtest.Chdir(t, tempdir)
defer back()

View File

@ -167,6 +167,20 @@ func (be *Backend) IsNotExist(err error) bool {
return bloberror.HasCode(err, bloberror.BlobNotFound)
}
func (be *Backend) IsPermanentError(err error) bool {
if be.IsNotExist(err) {
return true
}
var aerr *azcore.ResponseError
if errors.As(err, &aerr) {
if aerr.StatusCode == http.StatusRequestedRangeNotSatisfiable || aerr.StatusCode == http.StatusUnauthorized || aerr.StatusCode == http.StatusForbidden {
return true
}
}
return false
}
// Join combines path components with slashes.
func (be *Backend) Join(p ...string) string {
return path.Join(p...)
@ -176,11 +190,6 @@ func (be *Backend) Connections() uint {
return be.connections
}
// Location returns this backend's location (the container name).
func (be *Backend) Location() string {
return be.Join(be.cfg.AccountName, be.cfg.Prefix)
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *Backend) Hasher() hash.Hash {
return md5.New()
@ -313,6 +322,11 @@ func (be *Backend) openReader(ctx context.Context, h backend.Handle, length int,
return nil, err
}
if length > 0 && (resp.ContentLength == nil || *resp.ContentLength != int64(length)) {
_ = resp.Body.Close()
return nil, &azcore.ResponseError{ErrorCode: "restic-file-too-short", StatusCode: http.StatusRequestedRangeNotSatisfiable}
}
return resp.Body, err
}

View File

@ -2,6 +2,7 @@ package b2
import (
"context"
"fmt"
"hash"
"io"
"net/http"
@ -31,6 +32,8 @@ type b2Backend struct {
canDelete bool
}
var errTooShort = fmt.Errorf("file is too short")
// Billing happens in 1000 item granularity, but we are more interested in reducing the number of network round trips
const defaultListMaxItems = 10 * 1000
@ -159,11 +162,6 @@ func (be *b2Backend) Connections() uint {
return be.cfg.Connections
}
// Location returns the location for the backend.
func (be *b2Backend) Location() string {
return be.cfg.Bucket
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *b2Backend) Hasher() hash.Hash {
return nil
@ -186,13 +184,36 @@ func (be *b2Backend) IsNotExist(err error) bool {
return false
}
func (be *b2Backend) IsPermanentError(err error) bool {
// the library unfortunately endlessly retries authentication errors
return be.IsNotExist(err) || errors.Is(err, errTooShort)
}
// Load runs fn with a reader that yields the contents of the file at h at the
// given offset.
func (be *b2Backend) Load(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
return util.DefaultLoad(ctx, h, length, offset, be.openReader, fn)
return util.DefaultLoad(ctx, h, length, offset, be.openReader, func(rd io.Reader) error {
if length == 0 {
return fn(rd)
}
// there is no direct way to efficiently check whether the file is too short
// use a LimitedReader to track the number of bytes read
limrd := &io.LimitedReader{R: rd, N: int64(length)}
err := fn(limrd)
// check the underlying reader to be agnostic to however fn() handles the returned error
_, rderr := rd.Read([]byte{0})
if rderr == io.EOF && limrd.N != 0 {
// file is too short
return fmt.Errorf("%w: %v", errTooShort, err)
}
return err
})
}
func (be *b2Backend) openReader(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {

View File

@ -14,10 +14,6 @@ import (
// the context package need not be wrapped, as context cancellation is checked
// separately by the retrying logic.
type Backend interface {
// Location returns a string that describes the type and location of the
// repository.
Location() string
// Connections returns the maximum number of concurrent backend operations.
Connections() uint
@ -38,7 +34,9 @@ type Backend interface {
// Load runs fn with a reader that yields the contents of the file at h at the
// given offset. If length is larger than zero, only a portion of the file
// is read.
// is read. If the length is larger than zero and the file is too short to return
// the requested length bytes, then an error MUST be returned that is recognized
// by IsPermanentError().
//
// The function fn may be called multiple times during the same Load invocation
// and therefore must be idempotent.
@ -66,6 +64,12 @@ type Backend interface {
// for unwrapping it.
IsNotExist(err error) bool
// IsPermanentError returns true if the error can very likely not be resolved
// by retrying the operation. Backends should return true if the file is missing,
// the requested range does not (completely) exist in the file or the user is
// not authorized to perform the requested operation.
IsPermanentError(err error) bool
// Delete removes all data in the backend.
Delete(ctx context.Context) error
}

View File

@ -46,11 +46,6 @@ func (be *Backend) Connections() uint {
return be.b.Connections()
}
// Location returns the location of the backend.
func (be *Backend) Location() string {
return "DRY:" + be.b.Location()
}
// Delete removes all data in the backend.
func (be *Backend) Delete(_ context.Context) error {
return nil
@ -72,6 +67,10 @@ func (be *Backend) IsNotExist(err error) bool {
return be.b.IsNotExist(err)
}
func (be *Backend) IsPermanentError(err error) bool {
return be.b.IsPermanentError(err)
}
func (be *Backend) List(ctx context.Context, t backend.FileType, fn func(backend.FileInfo) error) error {
return be.b.List(ctx, t, fn)
}

View File

@ -36,7 +36,6 @@ func TestDry(t *testing.T) {
content string
wantErr string
}{
{d, "loc", "", "DRY:RAM", ""},
{d, "delete", "", "", ""},
{d, "stat", "a", "", "not found"},
{d, "list", "", "", ""},
@ -76,11 +75,6 @@ func TestDry(t *testing.T) {
if files != step.content {
t.Errorf("%d. List = %q, want %q", i, files, step.content)
}
case "loc":
loc := step.be.Location()
if loc != step.content {
t.Errorf("%d. Location = %q, want %q", i, loc, step.content)
}
case "delete":
err = step.be.Delete(ctx)
case "remove":
@ -96,7 +90,7 @@ func TestDry(t *testing.T) {
}
case "load":
data := ""
err = step.be.Load(ctx, handle, 100, 0, func(rd io.Reader) error {
err = step.be.Load(ctx, handle, 0, 0, func(rd io.Reader) error {
buf, err := io.ReadAll(rd)
data = string(buf)
return err

View File

@ -173,6 +173,21 @@ func (be *Backend) IsNotExist(err error) bool {
return errors.Is(err, storage.ErrObjectNotExist)
}
func (be *Backend) IsPermanentError(err error) bool {
if be.IsNotExist(err) {
return true
}
var gerr *googleapi.Error
if errors.As(err, &gerr) {
if gerr.Code == http.StatusRequestedRangeNotSatisfiable || gerr.Code == http.StatusUnauthorized || gerr.Code == http.StatusForbidden {
return true
}
}
return false
}
// Join combines path components with slashes.
func (be *Backend) Join(p ...string) string {
return path.Join(p...)
@ -182,11 +197,6 @@ func (be *Backend) Connections() uint {
return be.connections
}
// Location returns this backend's location (the bucket name).
func (be *Backend) Location() string {
return be.Join(be.bucketName, be.prefix)
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *Backend) Hasher() hash.Hash {
return md5.New()
@ -273,6 +283,11 @@ func (be *Backend) openReader(ctx context.Context, h backend.Handle, length int,
return nil, err
}
if length > 0 && r.Attrs.Size < offset+int64(length) {
_ = r.Close()
return nil, &googleapi.Error{Code: http.StatusRequestedRangeNotSatisfiable, Message: "restic-file-too-short"}
}
return r, err
}

View File

@ -13,6 +13,8 @@ import (
"github.com/peterbourgon/unixtransport"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"golang.org/x/net/http2"
)
// TransportOptions collects various options which can be set for an HTTP based
@ -74,7 +76,6 @@ func Transport(opts TransportOptions) (http.RoundTripper, error) {
KeepAlive: 30 * time.Second,
DualStack: true,
}).DialContext,
ForceAttemptHTTP2: true,
MaxIdleConns: 100,
MaxIdleConnsPerHost: 100,
IdleConnTimeout: 90 * time.Second,
@ -83,6 +84,17 @@ func Transport(opts TransportOptions) (http.RoundTripper, error) {
TLSClientConfig: &tls.Config{},
}
// ensure that http2 connections are closed if they are broken
h2, err := http2.ConfigureTransports(tr)
if err != nil {
panic(err)
}
if feature.Flag.Enabled(feature.BackendErrorRedesign) {
h2.WriteByteTimeout = 120 * time.Second
h2.ReadIdleTimeout = 60 * time.Second
h2.PingTimeout = 60 * time.Second
}
unixtransport.Register(tr)
if opts.InsecureTLS {
@ -119,6 +131,11 @@ func Transport(opts TransportOptions) (http.RoundTripper, error) {
tr.TLSClientConfig.RootCAs = pool
}
rt := http.RoundTripper(tr)
if feature.Flag.Enabled(feature.BackendErrorRedesign) {
rt = newWatchdogRoundtripper(rt, 120*time.Second, 128*1024)
}
// wrap in the debug round tripper (if active)
return debug.RoundTripper(tr), nil
return debug.RoundTripper(rt), nil
}

View File

@ -2,6 +2,7 @@ package local
import (
"context"
"fmt"
"hash"
"io"
"os"
@ -30,6 +31,8 @@ type Local struct {
// ensure statically that *Local implements backend.Backend.
var _ backend.Backend = &Local{}
var errTooShort = fmt.Errorf("file is too short")
func NewFactory() location.Factory {
return location.NewLimitedBackendFactory("local", ParseConfig, location.NoPassword, limiter.WrapBackendConstructor(Create), limiter.WrapBackendConstructor(Open))
}
@ -90,11 +93,6 @@ func (b *Local) Connections() uint {
return b.Config.Connections
}
// Location returns this backend's location (the directory name).
func (b *Local) Location() string {
return b.Path
}
// Hasher may return a hash function for calculating a content hash for the backend
func (b *Local) Hasher() hash.Hash {
return nil
@ -110,6 +108,10 @@ func (b *Local) IsNotExist(err error) bool {
return errors.Is(err, os.ErrNotExist)
}
func (b *Local) IsPermanentError(err error) bool {
return b.IsNotExist(err) || errors.Is(err, errTooShort) || errors.Is(err, os.ErrPermission)
}
// Save stores data in the backend at the handle.
func (b *Local) Save(_ context.Context, h backend.Handle, rd backend.RewindReader) (err error) {
finalname := b.Filename(h)
@ -219,6 +221,18 @@ func (b *Local) openReader(_ context.Context, h backend.Handle, length int, offs
return nil, err
}
fi, err := f.Stat()
if err != nil {
_ = f.Close()
return nil, err
}
size := fi.Size()
if size < offset+int64(length) {
_ = f.Close()
return nil, errTooShort
}
if offset > 0 {
_, err = f.Seek(offset, 0)
if err != nil {
@ -228,7 +242,7 @@ func (b *Local) openReader(_ context.Context, h backend.Handle, length int, offs
}
if length > 0 {
return backend.LimitReadCloser(f, int64(length)), nil
return util.LimitReadCloser(f, int64(length)), nil
}
return f, nil

View File

@ -43,6 +43,7 @@ func NewFactory() location.Factory {
}
var errNotFound = fmt.Errorf("not found")
var errTooSmall = errors.New("access beyond end of file")
const connectionCount = 2
@ -69,6 +70,10 @@ func (be *MemoryBackend) IsNotExist(err error) bool {
return errors.Is(err, errNotFound)
}
func (be *MemoryBackend) IsPermanentError(err error) bool {
return be.IsNotExist(err) || errors.Is(err, errTooSmall)
}
// Save adds new Data to the backend.
func (be *MemoryBackend) Save(ctx context.Context, h backend.Handle, rd backend.RewindReader) error {
be.m.Lock()
@ -131,12 +136,12 @@ func (be *MemoryBackend) openReader(ctx context.Context, h backend.Handle, lengt
}
buf := be.data[h]
if offset > int64(len(buf)) {
return nil, errors.New("offset beyond end of file")
if offset+int64(length) > int64(len(buf)) {
return nil, errTooSmall
}
buf = buf[offset:]
if length > 0 && len(buf) > length {
if length > 0 {
buf = buf[:length]
}
@ -217,11 +222,6 @@ func (be *MemoryBackend) Connections() uint {
return connectionCount
}
// Location returns the location of the backend (RAM).
func (be *MemoryBackend) Location() string {
return "RAM"
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *MemoryBackend) Hasher() hash.Hash {
return xxhash.New()

View File

@ -13,6 +13,7 @@ import (
type Backend struct {
CloseFn func() error
IsNotExistFn func(err error) bool
IsPermanentErrorFn func(err error) bool
SaveFn func(ctx context.Context, h backend.Handle, rd backend.RewindReader) error
OpenReaderFn func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error)
StatFn func(ctx context.Context, h backend.Handle) (backend.FileInfo, error)
@ -20,7 +21,6 @@ type Backend struct {
RemoveFn func(ctx context.Context, h backend.Handle) error
DeleteFn func(ctx context.Context) error
ConnectionsFn func() uint
LocationFn func() string
HasherFn func() hash.Hash
HasAtomicReplaceFn func() bool
}
@ -48,15 +48,6 @@ func (m *Backend) Connections() uint {
return m.ConnectionsFn()
}
// Location returns a location string.
func (m *Backend) Location() string {
if m.LocationFn == nil {
return ""
}
return m.LocationFn()
}
// Hasher may return a hash function for calculating a content hash for the backend
func (m *Backend) Hasher() hash.Hash {
if m.HasherFn == nil {
@ -83,6 +74,14 @@ func (m *Backend) IsNotExist(err error) bool {
return m.IsNotExistFn(err)
}
func (m *Backend) IsPermanentError(err error) bool {
if m.IsPermanentErrorFn == nil {
return false
}
return m.IsPermanentErrorFn(err)
}
// Save data in the backend.
func (m *Backend) Save(ctx context.Context, h backend.Handle, rd backend.RewindReader) error {
if m.SaveFn == nil {

View File

@ -17,6 +17,7 @@ import (
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
)
// make sure the rest backend implements backend.Backend
@ -30,6 +31,20 @@ type Backend struct {
layout.Layout
}
// restError is returned whenever the server returns a non-successful HTTP status.
type restError struct {
backend.Handle
StatusCode int
Status string
}
func (e *restError) Error() string {
if e.StatusCode == http.StatusNotFound && e.Handle.Type.String() != "invalid" {
return fmt.Sprintf("%v does not exist", e.Handle)
}
return fmt.Sprintf("unexpected HTTP response (%v): %v", e.StatusCode, e.Status)
}
func NewFactory() location.Factory {
return location.NewHTTPBackendFactory("rest", ParseConfig, StripPassword, Create, Open)
}
@ -96,7 +111,7 @@ func Create(ctx context.Context, cfg Config, rt http.RoundTripper) (*Backend, er
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("server response unexpected: %v (%v)", resp.Status, resp.StatusCode)
return nil, &restError{backend.Handle{}, resp.StatusCode, resp.Status}
}
return be, nil
@ -106,11 +121,6 @@ func (b *Backend) Connections() uint {
return b.connections
}
// Location returns this backend's location (the server's URL).
func (b *Backend) Location() string {
return b.url.String()
}
// Hasher may return a hash function for calculating a content hash for the backend
func (b *Backend) Hasher() hash.Hash {
return nil
@ -150,26 +160,31 @@ func (b *Backend) Save(ctx context.Context, h backend.Handle, rd backend.RewindR
}
if resp.StatusCode != http.StatusOK {
return errors.Errorf("server response unexpected: %v (%v)", resp.Status, resp.StatusCode)
return &restError{h, resp.StatusCode, resp.Status}
}
return nil
}
// notExistError is returned whenever the requested file does not exist on the
// server.
type notExistError struct {
backend.Handle
}
func (e *notExistError) Error() string {
return fmt.Sprintf("%v does not exist", e.Handle)
}
// IsNotExist returns true if the error was caused by a non-existing file.
func (b *Backend) IsNotExist(err error) bool {
var e *notExistError
return errors.As(err, &e)
var e *restError
return errors.As(err, &e) && e.StatusCode == http.StatusNotFound
}
func (b *Backend) IsPermanentError(err error) bool {
if b.IsNotExist(err) {
return true
}
var rerr *restError
if errors.As(err, &rerr) {
if rerr.StatusCode == http.StatusRequestedRangeNotSatisfiable || rerr.StatusCode == http.StatusUnauthorized || rerr.StatusCode == http.StatusForbidden {
return true
}
}
return false
}
// Load runs fn with a reader that yields the contents of the file at h at the
@ -221,14 +236,13 @@ func (b *Backend) openReader(ctx context.Context, h backend.Handle, length int,
return nil, errors.Wrap(err, "client.Do")
}
if resp.StatusCode == http.StatusNotFound {
_ = drainAndClose(resp)
return nil, &notExistError{h}
}
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
_ = drainAndClose(resp)
return nil, errors.Errorf("unexpected HTTP response (%v): %v", resp.StatusCode, resp.Status)
return nil, &restError{h, resp.StatusCode, resp.Status}
}
if feature.Flag.Enabled(feature.BackendErrorRedesign) && length > 0 && resp.ContentLength != int64(length) {
return nil, &restError{h, http.StatusRequestedRangeNotSatisfiable, "partial out of bounds read"}
}
return resp.Body, nil
@ -251,12 +265,8 @@ func (b *Backend) Stat(ctx context.Context, h backend.Handle) (backend.FileInfo,
return backend.FileInfo{}, err
}
if resp.StatusCode == http.StatusNotFound {
return backend.FileInfo{}, &notExistError{h}
}
if resp.StatusCode != http.StatusOK {
return backend.FileInfo{}, errors.Errorf("unexpected HTTP response (%v): %v", resp.StatusCode, resp.Status)
return backend.FileInfo{}, &restError{h, resp.StatusCode, resp.Status}
}
if resp.ContentLength < 0 {
@ -288,12 +298,8 @@ func (b *Backend) Remove(ctx context.Context, h backend.Handle) error {
return err
}
if resp.StatusCode == http.StatusNotFound {
return &notExistError{h}
}
if resp.StatusCode != http.StatusOK {
return errors.Errorf("blob not removed, server response: %v (%v)", resp.Status, resp.StatusCode)
return &restError{h, resp.StatusCode, resp.Status}
}
return nil
@ -330,7 +336,7 @@ func (b *Backend) List(ctx context.Context, t backend.FileType, fn func(backend.
if resp.StatusCode != http.StatusOK {
_ = drainAndClose(resp)
return errors.Errorf("List failed, server response: %v (%v)", resp.Status, resp.StatusCode)
return &restError{backend.Handle{Type: t}, resp.StatusCode, resp.Status}
}
if resp.Header.Get("Content-Type") == ContentTypeV2 {

View File

@ -2,22 +2,27 @@ package retry
import (
"context"
"errors"
"fmt"
"io"
"sync"
"time"
"github.com/cenkalti/backoff/v4"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/feature"
)
// Backend retries operations on the backend in case of an error with a
// backoff.
type Backend struct {
backend.Backend
MaxTries int
Report func(string, error, time.Duration)
Success func(string, int)
MaxElapsedTime time.Duration
Report func(string, error, time.Duration)
Success func(string, int)
failedLoads sync.Map
}
// statically ensure that RetryBackend implements backend.Backend.
@ -27,32 +32,64 @@ var _ backend.Backend = &Backend{}
// backoff. report is called with a description and the error, if one occurred.
// success is called with the number of retries before a successful operation
// (it is not called if it succeeded on the first try)
func New(be backend.Backend, maxTries int, report func(string, error, time.Duration), success func(string, int)) *Backend {
func New(be backend.Backend, maxElapsedTime time.Duration, report func(string, error, time.Duration), success func(string, int)) *Backend {
return &Backend{
Backend: be,
MaxTries: maxTries,
Report: report,
Success: success,
Backend: be,
MaxElapsedTime: maxElapsedTime,
Report: report,
Success: success,
}
}
// retryNotifyErrorWithSuccess is an extension of backoff.RetryNotify with notification of success after an error.
// success is NOT notified on the first run of operation (only after an error).
func retryNotifyErrorWithSuccess(operation backoff.Operation, b backoff.BackOff, notify backoff.Notify, success func(retries int)) error {
var operationWrapper backoff.Operation
if success == nil {
return backoff.RetryNotify(operation, b, notify)
}
retries := 0
operationWrapper := func() error {
err := operation()
if err != nil {
retries++
} else if retries > 0 {
success(retries)
operationWrapper = operation
} else {
retries := 0
operationWrapper = func() error {
err := operation()
if err != nil {
retries++
} else if retries > 0 {
success(retries)
}
return err
}
return err
}
return backoff.RetryNotify(operationWrapper, b, notify)
err := backoff.RetryNotify(operationWrapper, b, notify)
if err != nil && notify != nil {
// log final error
notify(err, -1)
}
return err
}
func withRetryAtLeastOnce(delegate *backoff.ExponentialBackOff) *retryAtLeastOnce {
return &retryAtLeastOnce{delegate: delegate}
}
type retryAtLeastOnce struct {
delegate *backoff.ExponentialBackOff
numTries uint64
}
func (b *retryAtLeastOnce) NextBackOff() time.Duration {
delay := b.delegate.NextBackOff()
b.numTries++
if b.numTries == 1 && b.delegate.Stop == delay {
return b.delegate.InitialInterval
}
return delay
}
func (b *retryAtLeastOnce) Reset() {
b.numTries = 0
b.delegate.Reset()
}
var fastRetries = false
@ -69,13 +106,31 @@ func (be *Backend) retry(ctx context.Context, msg string, f func() error) error
}
bo := backoff.NewExponentialBackOff()
bo.MaxElapsedTime = be.MaxElapsedTime
bo.InitialInterval = 1 * time.Second
bo.Multiplier = 2
if fastRetries {
// speed up integration tests
bo.InitialInterval = 1 * time.Millisecond
maxElapsedTime := 200 * time.Millisecond
if bo.MaxElapsedTime > maxElapsedTime {
bo.MaxElapsedTime = maxElapsedTime
}
}
err := retryNotifyErrorWithSuccess(f,
backoff.WithContext(backoff.WithMaxRetries(bo, uint64(be.MaxTries)), ctx),
err := retryNotifyErrorWithSuccess(
func() error {
err := f()
// don't retry permanent errors as those very likely cannot be fixed by retrying
// TODO remove IsNotExist(err) special cases when removing the feature flag
if feature.Flag.Enabled(feature.BackendErrorRedesign) && !errors.Is(err, &backoff.PermanentError{}) && be.Backend.IsPermanentError(err) {
return backoff.Permanent(err)
}
return err
},
backoff.WithContext(withRetryAtLeastOnce(bo), ctx),
func(err error, d time.Duration) {
if be.Report != nil {
be.Report(msg, err, d)
@ -121,19 +176,39 @@ func (be *Backend) Save(ctx context.Context, h backend.Handle, rd backend.Rewind
})
}
// Failed loads expire after an hour
var failedLoadExpiry = time.Hour
// Load returns a reader that yields the contents of the file at h at the
// given offset. If length is larger than zero, only a portion of the file
// is returned. rd must be closed after use. If an error is returned, the
// ReadCloser must be nil.
func (be *Backend) Load(ctx context.Context, h backend.Handle, length int, offset int64, consumer func(rd io.Reader) error) (err error) {
return be.retry(ctx, fmt.Sprintf("Load(%v, %v, %v)", h, length, offset),
key := h
key.IsMetadata = false
// Implement the circuit breaker pattern for files that exhausted all retries due to a non-permanent error
if v, ok := be.failedLoads.Load(key); ok {
if time.Since(v.(time.Time)) > failedLoadExpiry {
be.failedLoads.Delete(key)
} else {
// fail immediately if the file was already problematic during the last hour
return fmt.Errorf("circuit breaker open for file %v", h)
}
}
err = be.retry(ctx, fmt.Sprintf("Load(%v, %v, %v)", h, length, offset),
func() error {
err := be.Backend.Load(ctx, h, length, offset, consumer)
if be.Backend.IsNotExist(err) {
return backoff.Permanent(err)
}
return err
return be.Backend.Load(ctx, h, length, offset, consumer)
})
if feature.Flag.Enabled(feature.BackendErrorRedesign) && err != nil && !be.IsPermanentError(err) {
// We've exhausted the retries, the file is likely inaccessible. By excluding permanent
// errors, not found or truncated files are not recorded.
be.failedLoads.LoadOrStore(key, time.Now())
}
return err
}
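
The constructor above now takes a total retry time budget instead of a retry count, and Load() records files that exhausted their retries so later loads fail fast. A minimal usage sketch, assuming restic's internal packages; the logging callbacks are illustrative and not part of this diff:

package main

import (
	"log"
	"time"

	"github.com/restic/restic/internal/backend/mem"
	"github.com/restic/restic/internal/backend/retry"
)

func main() {
	// The second parameter is the elapsed-time budget for retries, not a retry count.
	be := retry.New(mem.New(), 15*time.Minute,
		func(msg string, err error, d time.Duration) {
			log.Printf("%v returned error, retrying after %v: %v", msg, d, err)
		},
		func(msg string, retries int) {
			log.Printf("%v operation successful after %d retries", msg, retries)
		})
	_ = be
}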
// Stat returns information about the File identified by h.

View File

@ -4,6 +4,7 @@ import (
"bytes"
"context"
"io"
"strings"
"testing"
"time"
@ -192,8 +193,9 @@ func TestBackendListRetryErrorBackend(t *testing.T) {
}
TestFastRetries(t)
const maxRetries = 2
retryBackend := New(be, maxRetries, nil, nil)
const maxElapsedTime = 10 * time.Millisecond
now := time.Now()
retryBackend := New(be, maxElapsedTime, nil, nil)
var listed []string
err := retryBackend.List(context.TODO(), backend.PackFile, func(fi backend.FileInfo) error {
@ -206,8 +208,9 @@ func TestBackendListRetryErrorBackend(t *testing.T) {
t.Fatalf("wrong error returned, want %v, got %v", ErrBackendTest, err)
}
if retries != maxRetries+1 {
t.Fatalf("List was called %d times, wanted %v", retries, maxRetries+1)
duration := time.Since(now)
if duration > 100*time.Millisecond {
t.Fatalf("list retries took %v, expected at most 10ms", duration)
}
test.Equals(t, names[:2], listed)
@ -289,7 +292,7 @@ func TestBackendLoadNotExists(t *testing.T) {
}
return nil, notFound
}
be.IsNotExistFn = func(err error) bool {
be.IsPermanentErrorFn = func(err error) bool {
return errors.Is(err, notFound)
}
@ -299,10 +302,61 @@ func TestBackendLoadNotExists(t *testing.T) {
err := retryBackend.Load(context.TODO(), backend.Handle{}, 0, 0, func(rd io.Reader) (err error) {
return nil
})
test.Assert(t, be.IsNotExistFn(err), "unexpected error %v", err)
test.Assert(t, be.IsPermanentErrorFn(err), "unexpected error %v", err)
test.Equals(t, 1, attempt)
}
func TestBackendLoadCircuitBreaker(t *testing.T) {
// retry should not retry if the error matches IsPermanentError
notFound := errors.New("not found")
otherError := errors.New("something")
attempt := 0
be := mock.NewBackend()
be.IsPermanentErrorFn = func(err error) bool {
return errors.Is(err, notFound)
}
be.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
attempt++
return nil, otherError
}
nilRd := func(rd io.Reader) (err error) {
return nil
}
TestFastRetries(t)
retryBackend := New(be, 2, nil, nil)
// trip the circuit breaker for file "other"
err := retryBackend.Load(context.TODO(), backend.Handle{Name: "other"}, 0, 0, nilRd)
test.Equals(t, otherError, err, "unexpected error")
test.Equals(t, 2, attempt)
attempt = 0
err = retryBackend.Load(context.TODO(), backend.Handle{Name: "other"}, 0, 0, nilRd)
test.Assert(t, strings.Contains(err.Error(), "circuit breaker open for file"), "expected circuit breaker error, got %v")
test.Equals(t, 0, attempt)
// don't trip for permanent errors
be.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
attempt++
return nil, notFound
}
err = retryBackend.Load(context.TODO(), backend.Handle{Name: "notfound"}, 0, 0, nilRd)
test.Equals(t, notFound, err, "expected circuit breaker to only affect other file, got %v")
err = retryBackend.Load(context.TODO(), backend.Handle{Name: "notfound"}, 0, 0, nilRd)
test.Equals(t, notFound, err, "persistent error must not trigger circuit breaker, got %v")
// wait for circuit breaker to expire
time.Sleep(5 * time.Millisecond)
old := failedLoadExpiry
defer func() {
failedLoadExpiry = old
}()
failedLoadExpiry = 3 * time.Millisecond
err = retryBackend.Load(context.TODO(), backend.Handle{Name: "other"}, 0, 0, nilRd)
test.Equals(t, notFound, err, "expected circuit breaker to reset, got %v")
}
func TestBackendStatNotExists(t *testing.T) {
// stat should not retry if the error matches IsNotExist
notFound := errors.New("not found")
@ -329,6 +383,36 @@ func TestBackendStatNotExists(t *testing.T) {
test.Equals(t, 1, attempt)
}
func TestBackendRetryPermanent(t *testing.T) {
// retry should not retry if the error matches IsPermanentError
notFound := errors.New("not found")
attempt := 0
be := mock.NewBackend()
be.IsPermanentErrorFn = func(err error) bool {
return errors.Is(err, notFound)
}
TestFastRetries(t)
retryBackend := New(be, 2, nil, nil)
err := retryBackend.retry(context.TODO(), "test", func() error {
attempt++
return notFound
})
test.Assert(t, be.IsPermanentErrorFn(err), "unexpected error %v", err)
test.Equals(t, 1, attempt)
attempt = 0
err = retryBackend.retry(context.TODO(), "test", func() error {
attempt++
return errors.New("something")
})
test.Assert(t, !be.IsPermanentErrorFn(err), "error unexpectedly considered permanent %v", err)
test.Equals(t, 2, attempt)
}
func assertIsCanceled(t *testing.T, err error) {
test.Assert(t, err == context.Canceled, "got unexpected err %v", err)
}

View File

@ -17,6 +17,7 @@ import (
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
@ -229,6 +230,21 @@ func (be *Backend) IsNotExist(err error) bool {
return errors.As(err, &e) && e.Code == "NoSuchKey"
}
func (be *Backend) IsPermanentError(err error) bool {
if be.IsNotExist(err) {
return true
}
var merr minio.ErrorResponse
if errors.As(err, &merr) {
if merr.Code == "InvalidRange" || merr.Code == "AccessDenied" {
return true
}
}
return false
}
// Join combines path components with slashes.
func (be *Backend) Join(p ...string) string {
return path.Join(p...)
@ -305,11 +321,6 @@ func (be *Backend) Connections() uint {
return be.cfg.Connections
}
// Location returns this backend's location (the bucket name).
func (be *Backend) Location() string {
return be.Join(be.cfg.Bucket, be.cfg.Prefix)
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *Backend) Hasher() hash.Hash {
return nil
@ -384,11 +395,18 @@ func (be *Backend) openReader(ctx context.Context, h backend.Handle, length int,
}
coreClient := minio.Core{Client: be.client}
rd, _, _, err := coreClient.GetObject(ctx, be.cfg.Bucket, objName, opts)
rd, info, _, err := coreClient.GetObject(ctx, be.cfg.Bucket, objName, opts)
if err != nil {
return nil, err
}
if feature.Flag.Enabled(feature.BackendErrorRedesign) && length > 0 {
if info.Size > 0 && info.Size != int64(length) {
_ = rd.Close()
return nil, minio.ErrorResponse{Code: "InvalidRange", Message: "restic-file-too-short"}
}
}
return rd, err
}

View File

@ -20,6 +20,7 @@ import (
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"github.com/cenkalti/backoff/v4"
"github.com/pkg/sftp"
@ -43,6 +44,8 @@ type SFTP struct {
var _ backend.Backend = &SFTP{}
var errTooShort = fmt.Errorf("file is too short")
func NewFactory() location.Factory {
return location.NewLimitedBackendFactory("sftp", ParseConfig, location.NoPassword, limiter.WrapBackendConstructor(Create), limiter.WrapBackendConstructor(Open))
}
@ -102,7 +105,12 @@ func startClient(cfg Config) (*SFTP, error) {
}()
// open the SFTP session
client, err := sftp.NewClientPipe(rd, wr)
client, err := sftp.NewClientPipe(rd, wr,
// write multiple packets (32kb) in parallel per file
// not strictly necessary as we use ReadFromWithConcurrency
sftp.UseConcurrentWrites(true),
// increase send buffer per file to 4MB
sftp.MaxConcurrentRequestsPerFile(128))
if err != nil {
return nil, errors.Errorf("unable to start the sftp session, error: %v", err)
}
@ -207,6 +215,10 @@ func (r *SFTP) IsNotExist(err error) bool {
return errors.Is(err, os.ErrNotExist)
}
func (r *SFTP) IsPermanentError(err error) bool {
return r.IsNotExist(err) || errors.Is(err, errTooShort) || errors.Is(err, os.ErrPermission)
}
func buildSSHCommand(cfg Config) (cmd string, args []string, err error) {
if cfg.Command != "" {
args, err := backend.SplitShellStrings(cfg.Command)
@ -280,11 +292,6 @@ func (r *SFTP) Connections() uint {
return r.Config.Connections
}
// Location returns this backend's location (the directory name).
func (r *SFTP) Location() string {
return r.p
}
// Hasher may return a hash function for calculating a content hash for the backend
func (r *SFTP) Hasher() hash.Hash {
return nil
@ -359,7 +366,7 @@ func (r *SFTP) Save(_ context.Context, h backend.Handle, rd backend.RewindReader
}()
// save data, make sure to use the optimized sftp upload method
wbytes, err := f.ReadFrom(rd)
wbytes, err := f.ReadFromWithConcurrency(rd, 0)
if err != nil {
_ = f.Close()
err = r.checkNoSpace(dirname, rd.Length(), err)
@ -414,7 +421,24 @@ func (r *SFTP) checkNoSpace(dir string, size int64, origErr error) error {
// Load runs fn with a reader that yields the contents of the file at h at the
// given offset.
func (r *SFTP) Load(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
return util.DefaultLoad(ctx, h, length, offset, r.openReader, fn)
return util.DefaultLoad(ctx, h, length, offset, r.openReader, func(rd io.Reader) error {
if length == 0 || !feature.Flag.Enabled(feature.BackendErrorRedesign) {
return fn(rd)
}
// there is no direct way to efficiently check whether the file is too short
// rd is already a LimitedReader which can be used to track the number of bytes read
err := fn(rd)
// check the underlying reader directly so the result does not depend on how fn() handles the returned error
_, rderr := rd.Read([]byte{0})
if rderr == io.EOF && rd.(*util.LimitedReadCloser).N != 0 {
// file is too short
return fmt.Errorf("%w: %v", errTooShort, err)
}
return err
})
}
func (r *SFTP) openReader(_ context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
@ -434,7 +458,7 @@ func (r *SFTP) openReader(_ context.Context, h backend.Handle, length int, offse
if length > 0 {
// unlimited reads usually use io.Copy which needs WriteTo support at the underlying reader
// limited reads are usually combined with io.ReadFull which reads all required bytes into a buffer in one go
return backend.LimitReadCloser(f, int64(length)), nil
return util.LimitReadCloser(f, int64(length)), nil
}
return f, nil
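
The wrapper above detects truncated files by probing the LimitedReader after the consumer returns: an EOF while the limit is not yet exhausted means fewer bytes than requested were available. A standalone sketch of that check using only the standard library (sizes are illustrative):

package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	// Request 100 bytes from a file that only has 10.
	f := bytes.NewReader(make([]byte, 10))
	lr := &io.LimitedReader{R: f, N: 100}

	_, _ = io.Copy(io.Discard, lr) // the consumer reads whatever is available

	// If the next read hits EOF while the limit is not exhausted,
	// fewer bytes than requested existed: the file is too short.
	if _, err := lr.Read([]byte{0}); err == io.EOF && lr.N != 0 {
		fmt.Println("file is too short")
	}
}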

View File

@ -19,6 +19,7 @@ import (
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"github.com/ncw/swift/v2"
)
@ -117,11 +118,6 @@ func (be *beSwift) Connections() uint {
return be.connections
}
// Location returns this backend's location (the container name).
func (be *beSwift) Location() string {
return be.container
}
// Hasher may return a hash function for calculating a content hash for the backend
func (be *beSwift) Hasher() hash.Hash {
return md5.New()
@ -153,7 +149,18 @@ func (be *beSwift) openReader(ctx context.Context, h backend.Handle, length int,
obj, _, err := be.conn.ObjectOpen(ctx, be.container, objName, false, headers)
if err != nil {
return nil, errors.Wrap(err, "conn.ObjectOpen")
return nil, fmt.Errorf("conn.ObjectOpen: %w", err)
}
if feature.Flag.Enabled(feature.BackendErrorRedesign) && length > 0 {
// get response length, but don't cause backend calls
cctx, cancel := context.WithCancel(context.Background())
cancel()
objLength, e := obj.Length(cctx)
if e == nil && objLength != int64(length) {
_ = obj.Close()
return nil, &swift.Error{StatusCode: http.StatusRequestedRangeNotSatisfiable, Text: "restic-file-too-short"}
}
}
return obj, nil
@ -242,6 +249,21 @@ func (be *beSwift) IsNotExist(err error) bool {
return errors.As(err, &e) && e.StatusCode == http.StatusNotFound
}
func (be *beSwift) IsPermanentError(err error) bool {
if be.IsNotExist(err) {
return true
}
var serr *swift.Error
if errors.As(err, &serr) {
if serr.StatusCode == http.StatusRequestedRangeNotSatisfiable || serr.StatusCode == http.StatusUnauthorized || serr.StatusCode == http.StatusForbidden {
return true
}
}
return false
}
// Delete removes all restic objects in the container.
// It will not remove the container itself.
func (be *beSwift) Delete(ctx context.Context) error {

View File

@ -36,6 +36,19 @@ func beTest(ctx context.Context, be backend.Backend, h backend.Handle) (bool, er
return err == nil, err
}
func LoadAll(ctx context.Context, be backend.Backend, h backend.Handle) ([]byte, error) {
var buf []byte
err := be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
var err error
buf, err = io.ReadAll(rd)
return err
})
if err != nil {
return nil, err
}
return buf, nil
}
// TestStripPasswordCall tests that the StripPassword method of a factory can be called without crashing.
// It does not verify whether passwords are removed correctly
func (s *Suite[C]) TestStripPasswordCall(_ *testing.T) {
@ -75,17 +88,6 @@ func (s *Suite[C]) TestCreateWithConfig(t *testing.T) {
}
}
// TestLocation tests that a location string is returned.
func (s *Suite[C]) TestLocation(t *testing.T) {
b := s.open(t)
defer s.close(t, b)
l := b.Location()
if l == "" {
t.Fatalf("invalid location string %q", l)
}
}
// TestConfig saves and loads a config from the backend.
func (s *Suite[C]) TestConfig(t *testing.T) {
b := s.open(t)
@ -94,11 +96,12 @@ func (s *Suite[C]) TestConfig(t *testing.T) {
var testString = "Config"
// create config and read it back
_, err := backend.LoadAll(context.TODO(), nil, b, backend.Handle{Type: backend.ConfigFile})
_, err := LoadAll(context.TODO(), b, backend.Handle{Type: backend.ConfigFile})
if err == nil {
t.Fatalf("did not get expected error for non-existing config")
}
test.Assert(t, b.IsNotExist(err), "IsNotExist() did not recognize error from LoadAll(): %v", err)
test.Assert(t, b.IsPermanentError(err), "IsPermanentError() did not recognize error from LoadAll(): %v", err)
err = b.Save(context.TODO(), backend.Handle{Type: backend.ConfigFile}, backend.NewByteReader([]byte(testString), b.Hasher()))
if err != nil {
@ -109,7 +112,7 @@ func (s *Suite[C]) TestConfig(t *testing.T) {
// same config
for _, name := range []string{"", "foo", "bar", "0000000000000000000000000000000000000000000000000000000000000000"} {
h := backend.Handle{Type: backend.ConfigFile, Name: name}
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
buf, err := LoadAll(context.TODO(), b, h)
if err != nil {
t.Fatalf("unable to read config with name %q: %+v", name, err)
}
@ -135,6 +138,7 @@ func (s *Suite[C]) TestLoad(t *testing.T) {
t.Fatalf("Load() did not return an error for non-existing blob")
}
test.Assert(t, b.IsNotExist(err), "IsNotExist() did not recognize non-existing blob: %v", err)
test.Assert(t, b.IsPermanentError(err), "IsPermanentError() did not recognize non-existing blob: %v", err)
length := rand.Intn(1<<24) + 2000
@ -181,8 +185,12 @@ func (s *Suite[C]) TestLoad(t *testing.T) {
}
getlen := l
if l >= len(d) && rand.Float32() >= 0.5 {
getlen = 0
if l >= len(d) {
if rand.Float32() >= 0.5 {
getlen = 0
} else {
getlen = len(d)
}
}
if l > 0 && l < len(d) {
@ -225,6 +233,18 @@ func (s *Suite[C]) TestLoad(t *testing.T) {
}
}
// test error checking for partial and fully out of bounds read
// only test for length > 0 as we currently do not need strict out of bounds handling for length==0
for _, offset := range []int{length - 99, length - 50, length, length + 100} {
err = b.Load(context.TODO(), handle, 100, int64(offset), func(rd io.Reader) (ierr error) {
_, ierr = io.ReadAll(rd)
return ierr
})
test.Assert(t, err != nil, "Load() did not return error on out of bounds read! o %v, l %v, filelength %v", offset, 100, length)
test.Assert(t, b.IsPermanentError(err), "IsPermanentError() did not recognize out of range read: %v", err)
test.Assert(t, !b.IsNotExist(err), "IsNotExist() must not recognize out of range read: %v", err)
}
test.OK(t, b.Remove(context.TODO(), handle))
}
@ -501,7 +521,7 @@ func (s *Suite[C]) TestSave(t *testing.T) {
err := b.Save(context.TODO(), h, backend.NewByteReader(data, b.Hasher()))
test.OK(t, err)
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
buf, err := LoadAll(context.TODO(), b, h)
test.OK(t, err)
if len(buf) != len(data) {
t.Fatalf("number of bytes does not match, want %v, got %v", len(data), len(buf))
@ -762,6 +782,7 @@ func (s *Suite[C]) TestBackend(t *testing.T) {
defer s.close(t, b)
test.Assert(t, !b.IsNotExist(nil), "IsNotExist() recognized nil error")
test.Assert(t, !b.IsPermanentError(nil), "IsPermanentError() recognized nil error")
for _, tpe := range []backend.FileType{
backend.PackFile, backend.KeyFile, backend.LockFile,
@ -782,11 +803,13 @@ func (s *Suite[C]) TestBackend(t *testing.T) {
_, err = b.Stat(context.TODO(), h)
test.Assert(t, err != nil, "blob data could be extracted before creation")
test.Assert(t, b.IsNotExist(err), "IsNotExist() did not recognize Stat() error: %v", err)
test.Assert(t, b.IsPermanentError(err), "IsPermanentError() did not recognize Stat() error: %v", err)
// try to read not existing blob
err = testLoad(b, h)
test.Assert(t, err != nil, "blob could be read before creation")
test.Assert(t, b.IsNotExist(err), "IsNotExist() did not recognize Load() error: %v", err)
test.Assert(t, b.IsPermanentError(err), "IsPermanentError() did not recognize Load() error: %v", err)
// try to get string out, should fail
ret, err = beTest(context.TODO(), b, h)
@ -800,7 +823,7 @@ func (s *Suite[C]) TestBackend(t *testing.T) {
// test Load()
h := backend.Handle{Type: tpe, Name: ts.id}
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
buf, err := LoadAll(context.TODO(), b, h)
test.OK(t, err)
test.Equals(t, ts.data, string(buf))

View File

@ -0,0 +1,15 @@
package util
import "io"
// LimitedReadCloser wraps io.LimitedReader and exposes the Close() method.
type LimitedReadCloser struct {
io.Closer
io.LimitedReader
}
// LimitReadCloser returns a new reader that wraps r in an io.LimitedReader, but also
// exposes the Close() method.
func LimitReadCloser(r io.ReadCloser, n int64) *LimitedReadCloser {
return &LimitedReadCloser{Closer: r, LimitedReader: io.LimitedReader{R: r, N: n}}
}
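
A small usage sketch of this helper; the file name is illustrative, and Close() still closes the wrapped file:

package main

import (
	"io"
	"os"

	"github.com/restic/restic/internal/backend/util"
)

func main() {
	f, err := os.Open("example-pack-file") // illustrative path
	if err != nil {
		panic(err)
	}
	// Read at most 1024 bytes; Close() closes the underlying file.
	rd := util.LimitReadCloser(f, 1024)
	defer rd.Close()
	_, _ = io.Copy(io.Discard, rd)
}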

View File

@ -1,76 +0,0 @@
package backend
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"io"
"github.com/minio/sha256-simd"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
)
func verifyContentMatchesName(s string, data []byte) (bool, error) {
if len(s) != hex.EncodedLen(sha256.Size) {
return false, fmt.Errorf("invalid length for ID: %q", s)
}
b, err := hex.DecodeString(s)
if err != nil {
return false, fmt.Errorf("invalid ID: %s", err)
}
var id [sha256.Size]byte
copy(id[:], b)
hashed := sha256.Sum256(data)
return id == hashed, nil
}
// LoadAll reads all data stored in the backend for the handle into the given
// buffer, which is truncated. If the buffer is not large enough or nil, a new
// one is allocated.
func LoadAll(ctx context.Context, buf []byte, be Backend, h Handle) ([]byte, error) {
retriedInvalidData := false
err := be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
// make sure this is idempotent, in case an error occurs this function may be called multiple times!
wr := bytes.NewBuffer(buf[:0])
_, cerr := io.Copy(wr, rd)
if cerr != nil {
return cerr
}
buf = wr.Bytes()
// retry loading damaged data only once. If a file fails to download correctly
// the second time, then it is likely corrupted at the backend. Return the data
// to the caller in that case to let it decide what to do with the data.
if !retriedInvalidData && h.Type != ConfigFile {
if matches, err := verifyContentMatchesName(h.Name, buf); err == nil && !matches {
debug.Log("retry loading broken blob %v", h)
retriedInvalidData = true
return errors.Errorf("loadAll(%v): invalid data returned", h)
}
}
return nil
})
if err != nil {
return nil, err
}
return buf, nil
}
// LimitedReadCloser wraps io.LimitedReader and exposes the Close() method.
type LimitedReadCloser struct {
io.Closer
io.LimitedReader
}
// LimitReadCloser returns a new reader wraps r in an io.LimitedReader, but also

// exposes the Close() method.
func LimitReadCloser(r io.ReadCloser, n int64) *LimitedReadCloser {
return &LimitedReadCloser{Closer: r, LimitedReader: io.LimitedReader{R: r, N: n}}
}

View File

@ -1,149 +0,0 @@
package backend_test
import (
"bytes"
"context"
"io"
"math/rand"
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/mem"
"github.com/restic/restic/internal/backend/mock"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
const KiB = 1 << 10
const MiB = 1 << 20
func TestLoadAll(t *testing.T) {
b := mem.New()
var buf []byte
for i := 0; i < 20; i++ {
data := rtest.Random(23+i, rand.Intn(MiB)+500*KiB)
id := restic.Hash(data)
h := backend.Handle{Name: id.String(), Type: backend.PackFile}
err := b.Save(context.TODO(), h, backend.NewByteReader(data, b.Hasher()))
rtest.OK(t, err)
buf, err := backend.LoadAll(context.TODO(), buf, b, backend.Handle{Type: backend.PackFile, Name: id.String()})
rtest.OK(t, err)
if len(buf) != len(data) {
t.Errorf("length of returned buffer does not match, want %d, got %d", len(data), len(buf))
continue
}
if !bytes.Equal(buf, data) {
t.Errorf("wrong data returned")
continue
}
}
}
func save(t testing.TB, be backend.Backend, buf []byte) backend.Handle {
id := restic.Hash(buf)
h := backend.Handle{Name: id.String(), Type: backend.PackFile}
err := be.Save(context.TODO(), h, backend.NewByteReader(buf, be.Hasher()))
if err != nil {
t.Fatal(err)
}
return h
}
type quickRetryBackend struct {
backend.Backend
}
func (be *quickRetryBackend) Load(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
err := be.Backend.Load(ctx, h, length, offset, fn)
if err != nil {
// retry
err = be.Backend.Load(ctx, h, length, offset, fn)
}
return err
}
func TestLoadAllBroken(t *testing.T) {
b := mock.NewBackend()
data := rtest.Random(23, rand.Intn(MiB)+500*KiB)
id := restic.Hash(data)
// damage buffer
data[0] ^= 0xff
b.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
return io.NopCloser(bytes.NewReader(data)), nil
}
// must fail on first try
_, err := backend.LoadAll(context.TODO(), nil, b, backend.Handle{Type: backend.PackFile, Name: id.String()})
if err == nil {
t.Fatalf("missing expected error")
}
// must return the broken data after a retry
be := &quickRetryBackend{Backend: b}
buf, err := backend.LoadAll(context.TODO(), nil, be, backend.Handle{Type: backend.PackFile, Name: id.String()})
rtest.OK(t, err)
if !bytes.Equal(buf, data) {
t.Fatalf("wrong data returned")
}
}
func TestLoadAllAppend(t *testing.T) {
b := mem.New()
h1 := save(t, b, []byte("foobar test string"))
randomData := rtest.Random(23, rand.Intn(MiB)+500*KiB)
h2 := save(t, b, randomData)
var tests = []struct {
handle backend.Handle
buf []byte
want []byte
}{
{
handle: h1,
buf: nil,
want: []byte("foobar test string"),
},
{
handle: h1,
buf: []byte("xxx"),
want: []byte("foobar test string"),
},
{
handle: h2,
buf: nil,
want: randomData,
},
{
handle: h2,
buf: make([]byte, 0, 200),
want: randomData,
},
{
handle: h2,
buf: []byte("foobarbaz"),
want: randomData,
},
}
for _, test := range tests {
t.Run("", func(t *testing.T) {
buf, err := backend.LoadAll(context.TODO(), test.buf, b, test.handle)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(buf, test.want) {
t.Errorf("wrong data returned, want %q, got %q", test.want, buf)
}
})
}
}

View File

@ -0,0 +1,104 @@
package backend
import (
"context"
"io"
"net/http"
"time"
)
// watchdogRoundtripper cancels an http request if an upload or download did not make progress
// within timeout. The time between fully sending the request and receiving a response is also
// limited by this timeout. This ensures that stuck requests are cancelled after some time.
//
// The roundtripper assumes that upload and download happen continuously. In particular,
// the caller must not make long pauses between individual read requests from the response body.
type watchdogRoundtripper struct {
rt http.RoundTripper
timeout time.Duration
chunkSize int
}
var _ http.RoundTripper = &watchdogRoundtripper{}
func newWatchdogRoundtripper(rt http.RoundTripper, timeout time.Duration, chunkSize int) *watchdogRoundtripper {
return &watchdogRoundtripper{
rt: rt,
timeout: timeout,
chunkSize: chunkSize,
}
}
func (w *watchdogRoundtripper) RoundTrip(req *http.Request) (*http.Response, error) {
timer := time.NewTimer(w.timeout)
ctx, cancel := context.WithCancel(req.Context())
// cancel context if timer expires
go func() {
defer timer.Stop()
select {
case <-timer.C:
cancel()
case <-ctx.Done():
}
}()
kick := func() {
timer.Reset(w.timeout)
}
req = req.Clone(ctx)
if req.Body != nil {
// kick watchdog timer as long as uploading makes progress
req.Body = newWatchdogReadCloser(req.Body, w.chunkSize, kick, nil)
}
resp, err := w.rt.RoundTrip(req)
if err != nil {
return nil, err
}
// kick watchdog timer as long as downloading makes progress
// cancel context to stop goroutine once response body is closed
resp.Body = newWatchdogReadCloser(resp.Body, w.chunkSize, kick, cancel)
return resp, nil
}
func newWatchdogReadCloser(rc io.ReadCloser, chunkSize int, kick func(), close func()) *watchdogReadCloser {
return &watchdogReadCloser{
rc: rc,
chunkSize: chunkSize,
kick: kick,
close: close,
}
}
type watchdogReadCloser struct {
rc io.ReadCloser
chunkSize int
kick func()
close func()
}
var _ io.ReadCloser = &watchdogReadCloser{}
func (w *watchdogReadCloser) Read(p []byte) (n int, err error) {
w.kick()
// Read is not required to fill the whole passed in byte slice
// Thus, keep things simple and just stay within our chunkSize.
if len(p) > w.chunkSize {
p = p[:w.chunkSize]
}
n, err = w.rc.Read(p)
w.kick()
return n, err
}
func (w *watchdogReadCloser) Close() error {
if w.close != nil {
w.close()
}
return w.rc.Close()
}
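
A minimal sketch of wrapping a standard transport with the watchdog, written as a test in the same package because the constructor is unexported; the URL is illustrative, and the 120 s / 128 KiB values mirror the transport wiring earlier in this diff:

package backend

import (
	"io"
	"net/http"
	"testing"
	"time"
)

func TestWatchdogSketch(t *testing.T) {
	// Cancel the request if no 128 KiB chunk makes progress within 120 seconds.
	rt := newWatchdogRoundtripper(http.DefaultTransport, 120*time.Second, 128*1024)
	client := &http.Client{Transport: rt}

	resp, err := client.Get("https://example.com/large-file") // illustrative URL
	if err != nil {
		t.Skip(err)
	}
	defer func() { _ = resp.Body.Close() }()
	_, _ = io.Copy(io.Discard, resp.Body)
}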

View File

@ -0,0 +1,201 @@
package backend
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"net/http/httptest"
"testing"
"time"
rtest "github.com/restic/restic/internal/test"
)
func TestRead(t *testing.T) {
data := []byte("abcdef")
var ctr int
kick := func() {
ctr++
}
var closed bool
onClose := func() {
closed = true
}
wd := newWatchdogReadCloser(io.NopCloser(bytes.NewReader(data)), 1, kick, onClose)
out, err := io.ReadAll(wd)
rtest.OK(t, err)
rtest.Equals(t, data, out, "data mismatch")
// the EOF read also triggers the kick function
rtest.Equals(t, len(data)*2+2, ctr, "unexpected number of kick calls")
rtest.Equals(t, false, closed, "close function called too early")
rtest.OK(t, wd.Close())
rtest.Equals(t, true, closed, "close function not called")
}
func TestRoundtrip(t *testing.T) {
t.Parallel()
// at the higher delay values, it takes longer to transmit the request/response body
// than the roundTripper timeout
for _, delay := range []int{0, 1, 10, 20} {
t.Run(fmt.Sprintf("%v", delay), func(t *testing.T) {
msg := []byte("ping-pong-data")
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
data, err := io.ReadAll(r.Body)
if err != nil {
w.WriteHeader(500)
return
}
w.WriteHeader(200)
// slowly send the reply
for len(data) >= 2 {
_, _ = w.Write(data[:2])
w.(http.Flusher).Flush()
data = data[2:]
time.Sleep(time.Duration(delay) * time.Millisecond)
}
_, _ = w.Write(data)
}))
defer srv.Close()
rt := newWatchdogRoundtripper(http.DefaultTransport, 50*time.Millisecond, 2)
req, err := http.NewRequestWithContext(context.TODO(), "GET", srv.URL, io.NopCloser(newSlowReader(bytes.NewReader(msg), time.Duration(delay)*time.Millisecond)))
rtest.OK(t, err)
resp, err := rt.RoundTrip(req)
rtest.OK(t, err)
rtest.Equals(t, 200, resp.StatusCode, "unexpected status code")
response, err := io.ReadAll(resp.Body)
rtest.OK(t, err)
rtest.Equals(t, msg, response, "unexpected response")
rtest.OK(t, resp.Body.Close())
})
}
}
func TestCanceledRoundtrip(t *testing.T) {
rt := newWatchdogRoundtripper(http.DefaultTransport, time.Second, 2)
ctx, cancel := context.WithCancel(context.Background())
cancel()
req, err := http.NewRequestWithContext(ctx, "GET", "http://some.random.url.dfdgsfg", nil)
rtest.OK(t, err)
resp, err := rt.RoundTrip(req)
rtest.Equals(t, context.Canceled, err)
// make linter happy
if resp != nil {
rtest.OK(t, resp.Body.Close())
}
}
type slowReader struct {
data io.Reader
delay time.Duration
}
func newSlowReader(data io.Reader, delay time.Duration) *slowReader {
return &slowReader{
data: data,
delay: delay,
}
}
func (s *slowReader) Read(p []byte) (n int, err error) {
time.Sleep(s.delay)
return s.data.Read(p)
}
func TestUploadTimeout(t *testing.T) {
t.Parallel()
msg := []byte("ping")
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := io.ReadAll(r.Body)
if err != nil {
w.WriteHeader(500)
return
}
t.Error("upload should have been canceled")
}))
defer srv.Close()
rt := newWatchdogRoundtripper(http.DefaultTransport, 10*time.Millisecond, 1024)
req, err := http.NewRequestWithContext(context.TODO(), "GET", srv.URL, io.NopCloser(newSlowReader(bytes.NewReader(msg), 100*time.Millisecond)))
rtest.OK(t, err)
resp, err := rt.RoundTrip(req)
rtest.Equals(t, context.Canceled, err)
// make linter happy
if resp != nil {
rtest.OK(t, resp.Body.Close())
}
}
func TestProcessingTimeout(t *testing.T) {
t.Parallel()
msg := []byte("ping")
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := io.ReadAll(r.Body)
if err != nil {
w.WriteHeader(500)
return
}
time.Sleep(100 * time.Millisecond)
w.WriteHeader(200)
}))
defer srv.Close()
rt := newWatchdogRoundtripper(http.DefaultTransport, 10*time.Millisecond, 1024)
req, err := http.NewRequestWithContext(context.TODO(), "GET", srv.URL, io.NopCloser(bytes.NewReader(msg)))
rtest.OK(t, err)
resp, err := rt.RoundTrip(req)
rtest.Equals(t, context.Canceled, err)
// make linter happy
if resp != nil {
rtest.OK(t, resp.Body.Close())
}
}
func TestDownloadTimeout(t *testing.T) {
t.Parallel()
msg := []byte("ping")
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
data, err := io.ReadAll(r.Body)
if err != nil {
w.WriteHeader(500)
return
}
w.WriteHeader(200)
_, _ = w.Write(data[:2])
w.(http.Flusher).Flush()
data = data[2:]
time.Sleep(100 * time.Millisecond)
_, _ = w.Write(data)
}))
defer srv.Close()
rt := newWatchdogRoundtripper(http.DefaultTransport, 10*time.Millisecond, 1024)
req, err := http.NewRequestWithContext(context.TODO(), "GET", srv.URL, io.NopCloser(bytes.NewReader(msg)))
rtest.OK(t, err)
resp, err := rt.RoundTrip(req)
rtest.OK(t, err)
rtest.Equals(t, 200, resp.StatusCode, "unexpected status code")
_, err = io.ReadAll(resp.Body)
rtest.Equals(t, context.Canceled, err, "response download not canceled")
rtest.OK(t, resp.Body.Close())
}

View File

@ -20,13 +20,15 @@ type Cache struct {
c *simplelru.LRU[restic.ID, []byte]
free, size int // Current and max capacity, in bytes.
inProgress map[restic.ID]chan struct{}
}
// New constructs a blob cache that stores at most size bytes worth of blobs.
func New(size int) *Cache {
c := &Cache{
free: size,
size: size,
free: size,
size: size,
inProgress: make(map[restic.ID]chan struct{}),
}
// NewLRU wants us to specify some max. number of entries, else it errors.
@ -85,6 +87,48 @@ func (c *Cache) Get(id restic.ID) ([]byte, bool) {
return blob, ok
}
func (c *Cache) GetOrCompute(id restic.ID, compute func() ([]byte, error)) ([]byte, error) {
// check if already cached
blob, ok := c.Get(id)
if ok {
return blob, nil
}
// check for parallel download or start our own
finish := make(chan struct{})
c.mu.Lock()
waitForResult, isDownloading := c.inProgress[id]
if !isDownloading {
c.inProgress[id] = finish
// remove progress channel once finished here
defer func() {
c.mu.Lock()
delete(c.inProgress, id)
c.mu.Unlock()
close(finish)
}()
}
c.mu.Unlock()
if isDownloading {
// wait for result of parallel download
<-waitForResult
blob, ok := c.Get(id)
if ok {
return blob, nil
}
}
// download it
blob, err := compute()
if err == nil {
c.Add(id, blob)
}
return blob, err
}
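
A minimal usage sketch of GetOrCompute; the cache size, ID and blob contents are illustrative. Concurrent callers asking for the same ID share a single compute call, the others wait and then read the result from the cache:

package main

import (
	"fmt"

	"github.com/restic/restic/internal/bloblru"
	"github.com/restic/restic/internal/restic"
)

func main() {
	c := bloblru.New(64 << 20) // 64 MiB cache; size is illustrative

	var id restic.ID // in real code this is the blob ID to fetch
	buf, err := c.GetOrCompute(id, func() ([]byte, error) {
		// Only one goroutine per ID runs this at a time; the result is
		// added to the cache and shared with waiting callers.
		return []byte("blob data"), nil
	})
	fmt.Println(len(buf), err)
}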
func (c *Cache) evict(key restic.ID, blob []byte) {
debug.Log("bloblru.Cache: evict %v, %d bytes", key, cap(blob))
c.free += cap(blob) + overhead

View File

@ -1,11 +1,14 @@
package bloblru
import (
"context"
"fmt"
"math/rand"
"testing"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"golang.org/x/sync/errgroup"
)
func TestCache(t *testing.T) {
@ -52,6 +55,70 @@ func TestCache(t *testing.T) {
rtest.Equals(t, cacheSize, c.free)
}
func TestCacheGetOrCompute(t *testing.T) {
var id1, id2 restic.ID
id1[0] = 1
id2[0] = 2
const (
kiB = 1 << 10
cacheSize = 64*kiB + 3*overhead
)
c := New(cacheSize)
e := fmt.Errorf("broken")
_, err := c.GetOrCompute(id1, func() ([]byte, error) {
return nil, e
})
rtest.Equals(t, e, err, "expected error was not returned")
// fill buffer
data1 := make([]byte, 10*kiB)
blob, err := c.GetOrCompute(id1, func() ([]byte, error) {
return data1, nil
})
rtest.OK(t, err)
rtest.Equals(t, &data1[0], &blob[0], "wrong buffer returend")
// now the buffer should be returned without calling the compute function
blob, err = c.GetOrCompute(id1, func() ([]byte, error) {
return nil, e
})
rtest.OK(t, err)
rtest.Equals(t, &data1[0], &blob[0], "wrong buffer returend")
// check concurrency
wg, _ := errgroup.WithContext(context.TODO())
wait := make(chan struct{})
calls := make(chan struct{}, 10)
// start a bunch of blocking goroutines
for i := 0; i < 10; i++ {
wg.Go(func() error {
buf, err := c.GetOrCompute(id2, func() ([]byte, error) {
// block to ensure that multiple requests are waiting in parallel
<-wait
calls <- struct{}{}
return make([]byte, 42), nil
})
if len(buf) != 42 {
return fmt.Errorf("wrong buffer")
}
return err
})
}
close(wait)
rtest.OK(t, wg.Wait())
close(calls)
count := 0
for range calls {
count++
}
rtest.Equals(t, 1, count, "expected exactly one call of the compute function")
}
func BenchmarkAdd(b *testing.B) {
const (
MiB = 1 << 20

View File

@ -40,7 +40,8 @@ func (b *Backend) Remove(ctx context.Context, h backend.Handle) error {
return err
}
return b.Cache.remove(h)
_, err = b.Cache.remove(h)
return err
}
func autoCacheTypes(h backend.Handle) bool {
@ -79,10 +80,9 @@ func (b *Backend) Save(ctx context.Context, h backend.Handle, rd backend.RewindR
return err
}
err = b.Cache.Save(h, rd)
err = b.Cache.save(h, rd)
if err != nil {
debug.Log("unable to save %v to cache: %v", h, err)
_ = b.Cache.remove(h)
return err
}
@ -120,11 +120,11 @@ func (b *Backend) cacheFile(ctx context.Context, h backend.Handle) error {
if !b.Cache.Has(h) {
// nope, it's still not in the cache, pull it from the repo and save it
err := b.Backend.Load(ctx, h, 0, 0, func(rd io.Reader) error {
return b.Cache.Save(h, rd)
return b.Cache.save(h, rd)
})
if err != nil {
// try to remove from the cache, ignore errors
_ = b.Cache.remove(h)
_, _ = b.Cache.remove(h)
}
return err
}
@ -134,9 +134,9 @@ func (b *Backend) cacheFile(ctx context.Context, h backend.Handle) error {
// loadFromCache will try to load the file from the cache.
func (b *Backend) loadFromCache(h backend.Handle, length int, offset int64, consumer func(rd io.Reader) error) (bool, error) {
rd, err := b.Cache.load(h, length, offset)
rd, inCache, err := b.Cache.load(h, length, offset)
if err != nil {
return false, err
return inCache, err
}
err = consumer(rd)
@ -162,14 +162,10 @@ func (b *Backend) Load(ctx context.Context, h backend.Handle, length int, offset
// try loading from cache without checking that the handle is actually cached
inCache, err := b.loadFromCache(h, length, offset, consumer)
if inCache {
if err == nil {
return nil
}
// drop from cache and retry once
_ = b.Cache.remove(h)
debug.Log("error loading %v from cache: %v", h, err)
// the caller must explicitly use cache.Forget() to remove the cache entry
return err
}
debug.Log("error loading %v from cache: %v", h, err)
// if we don't automatically cache this file type, fall back to the backend
if !autoCacheTypes(h) {
@ -185,6 +181,9 @@ func (b *Backend) Load(ctx context.Context, h backend.Handle, length int, offset
inCache, err = b.loadFromCache(h, length, offset, consumer)
if inCache {
if err != nil {
debug.Log("error loading %v from cache: %v", h, err)
}
return err
}
@ -198,13 +197,9 @@ func (b *Backend) Stat(ctx context.Context, h backend.Handle) (backend.FileInfo,
debug.Log("cache Stat(%v)", h)
fi, err := b.Backend.Stat(ctx, h)
if err != nil {
if b.Backend.IsNotExist(err) {
// try to remove from the cache, ignore errors
_ = b.Cache.remove(h)
}
return fi, err
if err != nil && b.Backend.IsNotExist(err) {
// try to remove from the cache, ignore errors
_, _ = b.Cache.remove(h)
}
return fi, err

View File

@ -5,6 +5,7 @@ import (
"context"
"io"
"math/rand"
"strings"
"sync"
"testing"
"time"
@ -12,12 +13,13 @@ import (
"github.com/pkg/errors"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/mem"
backendtest "github.com/restic/restic/internal/backend/test"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/test"
)
func loadAndCompare(t testing.TB, be backend.Backend, h backend.Handle, data []byte) {
buf, err := backend.LoadAll(context.TODO(), nil, be, h)
buf, err := backendtest.LoadAll(context.TODO(), be, h)
if err != nil {
t.Fatal(err)
}
@ -90,7 +92,7 @@ func TestBackend(t *testing.T) {
loadAndCompare(t, be, h, data)
// load data via cache
loadAndCompare(t, be, h, data)
loadAndCompare(t, wbe, h, data)
// remove directly
remove(t, be, h)
@ -113,6 +115,77 @@ func TestBackend(t *testing.T) {
}
}
type loadCountingBackend struct {
backend.Backend
ctr int
}
func (l *loadCountingBackend) Load(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
l.ctr++
return l.Backend.Load(ctx, h, length, offset, fn)
}
func TestOutOfBoundsAccess(t *testing.T) {
be := &loadCountingBackend{Backend: mem.New()}
c := TestNewCache(t)
wbe := c.Wrap(be)
h, data := randomData(50)
save(t, be, h, data)
// load out of bounds
err := wbe.Load(context.TODO(), h, 100, 100, func(rd io.Reader) error {
t.Error("cache returned non-existant file section")
return errors.New("broken")
})
test.Assert(t, strings.Contains(err.Error(), " is too short"), "expected too short error, got %v", err)
test.Equals(t, 1, be.ctr, "expected file to be loaded only once")
// file must nevertheless get cached
if !c.Has(h) {
t.Errorf("cache doesn't have file after load")
}
// start within bounds, but request too large chunk
err = wbe.Load(context.TODO(), h, 100, 0, func(rd io.Reader) error {
t.Error("cache returned non-existant file section")
return errors.New("broken")
})
test.Assert(t, strings.Contains(err.Error(), " is too short"), "expected too short error, got %v", err)
test.Equals(t, 1, be.ctr, "expected file to be loaded only once")
}
func TestForget(t *testing.T) {
be := &loadCountingBackend{Backend: mem.New()}
c := TestNewCache(t)
wbe := c.Wrap(be)
h, data := randomData(50)
save(t, be, h, data)
loadAndCompare(t, wbe, h, data)
test.Equals(t, 1, be.ctr, "expected file to be loaded once")
// must still exist even if load returns an error
exp := errors.New("error")
err := wbe.Load(context.TODO(), h, 0, 0, func(rd io.Reader) error {
return exp
})
test.Equals(t, exp, err, "wrong error")
test.Assert(t, c.Has(h), "missing cache entry")
test.OK(t, c.Forget(h))
test.Assert(t, !c.Has(h), "cache entry should have been removed")
// cache it again
loadAndCompare(t, wbe, h, data)
test.Assert(t, c.Has(h), "missing cache entry")
// forget must delete file only once
err = c.Forget(h)
test.Assert(t, strings.Contains(err.Error(), "circuit breaker prevents repeated deletion of cached file"), "wrong error message %q", err)
test.Assert(t, c.Has(h), "cache entry should still exist")
}
type loadErrorBackend struct {
backend.Backend
loadError error
@ -140,7 +213,7 @@ func TestErrorBackend(t *testing.T) {
loadTest := func(wg *sync.WaitGroup, be backend.Backend) {
defer wg.Done()
buf, err := backend.LoadAll(context.TODO(), nil, be, h)
buf, err := backendtest.LoadAll(context.TODO(), be, h)
if err == testErr {
return
}
@ -165,38 +238,3 @@ func TestErrorBackend(t *testing.T) {
wg.Wait()
}
func TestBackendRemoveBroken(t *testing.T) {
be := mem.New()
c := TestNewCache(t)
h, data := randomData(5234142)
// save directly in backend
save(t, be, h, data)
// prime cache with broken copy
broken := append([]byte{}, data...)
broken[0] ^= 0xff
err := c.Save(h, bytes.NewReader(broken))
test.OK(t, err)
// loadall retries if broken data was returned
buf, err := backend.LoadAll(context.TODO(), nil, c.Wrap(be), h)
test.OK(t, err)
if !bytes.Equal(buf, data) {
t.Fatalf("wrong data returned")
}
// check that the cache now contains the correct data
rd, err := c.load(h, 0, 0)
defer func() {
_ = rd.Close()
}()
test.OK(t, err)
cached, err := io.ReadAll(rd)
test.OK(t, err)
if !bytes.Equal(cached, data) {
t.Fatalf("wrong data cache")
}
}

View File

@ -6,6 +6,7 @@ import (
"path/filepath"
"regexp"
"strconv"
"sync"
"time"
"github.com/pkg/errors"
@ -20,6 +21,8 @@ type Cache struct {
path string
Base string
Created bool
forgotten sync.Map
}
const dirMode = 0700

View File

@ -1,6 +1,7 @@
package cache
import (
"fmt"
"io"
"os"
"path/filepath"
@ -8,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/fs"
@ -31,54 +33,54 @@ func (c *Cache) canBeCached(t backend.FileType) bool {
return ok
}
// Load returns a reader that yields the contents of the file with the
// load returns a reader that yields the contents of the file with the
// given handle. rd must be closed after use. If an error is returned, the
// ReadCloser is nil.
func (c *Cache) load(h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
// ReadCloser is nil. The bool return value indicates whether the requested
// file exists in the cache. It can be true even when no reader is returned
// because length or offset are out of bounds
func (c *Cache) load(h backend.Handle, length int, offset int64) (io.ReadCloser, bool, error) {
debug.Log("Load(%v, %v, %v) from cache", h, length, offset)
if !c.canBeCached(h.Type) {
return nil, errors.New("cannot be cached")
return nil, false, errors.New("cannot be cached")
}
f, err := fs.Open(c.filename(h))
if err != nil {
return nil, errors.WithStack(err)
return nil, false, errors.WithStack(err)
}
fi, err := f.Stat()
if err != nil {
_ = f.Close()
return nil, errors.WithStack(err)
return nil, true, errors.WithStack(err)
}
size := fi.Size()
if size <= int64(crypto.CiphertextLength(0)) {
_ = f.Close()
_ = c.remove(h)
return nil, errors.Errorf("cached file %v is truncated, removing", h)
return nil, true, errors.Errorf("cached file %v is truncated", h)
}
if size < offset+int64(length) {
_ = f.Close()
_ = c.remove(h)
return nil, errors.Errorf("cached file %v is too small, removing", h)
return nil, true, errors.Errorf("cached file %v is too short", h)
}
if offset > 0 {
if _, err = f.Seek(offset, io.SeekStart); err != nil {
_ = f.Close()
return nil, err
return nil, true, err
}
}
if length <= 0 {
return f, nil
return f, true, nil
}
return backend.LimitReadCloser(f, int64(length)), nil
return util.LimitReadCloser(f, int64(length)), true, nil
}
// Save saves a file in the cache.
func (c *Cache) Save(h backend.Handle, rd io.Reader) error {
// save saves a file in the cache.
func (c *Cache) save(h backend.Handle, rd io.Reader) error {
debug.Log("Save to cache: %v", h)
if rd == nil {
return errors.New("Save() called with nil reader")
@ -138,13 +140,34 @@ func (c *Cache) Save(h backend.Handle, rd io.Reader) error {
return errors.WithStack(err)
}
// Remove deletes a file. When the file is not cached, no error is returned.
func (c *Cache) remove(h backend.Handle) error {
if !c.Has(h) {
return nil
func (c *Cache) Forget(h backend.Handle) error {
h.IsMetadata = false
if _, ok := c.forgotten.Load(h); ok {
// Delete a file at most once while restic runs.
// This prevents repeatedly caching and forgetting broken files
return fmt.Errorf("circuit breaker prevents repeated deletion of cached file %v", h)
}
return fs.Remove(c.filename(h))
removed, err := c.remove(h)
if removed {
c.forgotten.Store(h, struct{}{})
}
return err
}
// remove deletes a file. When the file is not cached, no error is returned.
func (c *Cache) remove(h backend.Handle) (bool, error) {
if !c.canBeCached(h.Type) {
return false, nil
}
err := fs.Remove(c.filename(h))
removed := err == nil
if errors.Is(err, os.ErrNotExist) {
err = nil
}
return removed, err
}
// Clear removes all files of type t from the cache that are not contained in

View File

@ -14,7 +14,7 @@ import (
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/test"
rtest "github.com/restic/restic/internal/test"
"golang.org/x/sync/errgroup"
)
@ -22,7 +22,7 @@ import (
func generateRandomFiles(t testing.TB, tpe backend.FileType, c *Cache) restic.IDSet {
ids := restic.NewIDSet()
for i := 0; i < rand.Intn(15)+10; i++ {
buf := test.Random(rand.Int(), 1<<19)
buf := rtest.Random(rand.Int(), 1<<19)
id := restic.Hash(buf)
h := backend.Handle{Type: tpe, Name: id.String()}
@ -30,7 +30,7 @@ func generateRandomFiles(t testing.TB, tpe backend.FileType, c *Cache) restic.ID
t.Errorf("index %v present before save", id)
}
err := c.Save(h, bytes.NewReader(buf))
err := c.save(h, bytes.NewReader(buf))
if err != nil {
t.Fatal(err)
}
@ -48,10 +48,11 @@ func randomID(s restic.IDSet) restic.ID {
}
func load(t testing.TB, c *Cache, h backend.Handle) []byte {
rd, err := c.load(h, 0, 0)
rd, inCache, err := c.load(h, 0, 0)
if err != nil {
t.Fatal(err)
}
rtest.Equals(t, true, inCache, "expected inCache flag to be true")
if rd == nil {
t.Fatalf("load() returned nil reader")
@ -144,14 +145,14 @@ func TestFileLoad(t *testing.T) {
c := TestNewCache(t)
// save about 5 MiB of data in the cache
data := test.Random(rand.Int(), 5234142)
data := rtest.Random(rand.Int(), 5234142)
id := restic.ID{}
copy(id[:], data)
h := backend.Handle{
Type: restic.PackFile,
Name: id.String(),
}
if err := c.Save(h, bytes.NewReader(data)); err != nil {
if err := c.save(h, bytes.NewReader(data)); err != nil {
t.Fatalf("Save() returned error: %v", err)
}
@ -169,10 +170,11 @@ func TestFileLoad(t *testing.T) {
for _, test := range tests {
t.Run(fmt.Sprintf("%v/%v", test.length, test.offset), func(t *testing.T) {
rd, err := c.load(h, test.length, test.offset)
rd, inCache, err := c.load(h, test.length, test.offset)
if err != nil {
t.Fatal(err)
}
rtest.Equals(t, true, inCache, "expected inCache flag to be true")
buf, err := io.ReadAll(rd)
if err != nil {
@ -225,7 +227,7 @@ func TestFileSaveConcurrent(t *testing.T) {
var (
c = TestNewCache(t)
data = test.Random(1, 10000)
data = rtest.Random(1, 10000)
g errgroup.Group
id restic.ID
)
@ -237,7 +239,7 @@ func TestFileSaveConcurrent(t *testing.T) {
}
for i := 0; i < nproc/2; i++ {
g.Go(func() error { return c.Save(h, bytes.NewReader(data)) })
g.Go(func() error { return c.save(h, bytes.NewReader(data)) })
// Can't use load because only the main goroutine may call t.Fatal.
g.Go(func() error {
@ -245,7 +247,7 @@ func TestFileSaveConcurrent(t *testing.T) {
// ensure is ENOENT or nil error.
time.Sleep(time.Duration(100+rand.Intn(200)) * time.Millisecond)
f, err := c.load(h, 0, 0)
f, _, err := c.load(h, 0, 0)
t.Logf("Load error: %v", err)
switch {
case err == nil:
@ -264,23 +266,23 @@ func TestFileSaveConcurrent(t *testing.T) {
})
}
test.OK(t, g.Wait())
rtest.OK(t, g.Wait())
saved := load(t, c, h)
test.Equals(t, data, saved)
rtest.Equals(t, data, saved)
}
func TestFileSaveAfterDamage(t *testing.T) {
c := TestNewCache(t)
test.OK(t, fs.RemoveAll(c.path))
rtest.OK(t, fs.RemoveAll(c.path))
// save a few bytes of data in the cache
data := test.Random(123456789, 42)
data := rtest.Random(123456789, 42)
id := restic.Hash(data)
h := backend.Handle{
Type: restic.PackFile,
Name: id.String(),
}
if err := c.Save(h, bytes.NewReader(data)); err == nil {
if err := c.save(h, bytes.NewReader(data)); err == nil {
t.Fatal("Missing error when saving to deleted cache directory")
}
}

View File

@ -2,21 +2,16 @@ package checker
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"runtime"
"sort"
"sync"
"github.com/klauspost/compress/zstd"
"github.com/minio/sha256-simd"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/s3"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/hashing"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/repository"
@ -90,16 +85,6 @@ func (err *ErrOldIndexFormat) Error() string {
return fmt.Sprintf("index %v has old format", err.ID)
}
// ErrPackData is returned if errors are discovered while verifying a packfile
type ErrPackData struct {
PackID restic.ID
errs []error
}
func (e *ErrPackData) Error() string {
return fmt.Sprintf("pack %v contains %v errors: %v", e.PackID, len(e.errs), e.errs)
}
func (c *Checker) LoadSnapshots(ctx context.Context) error {
var err error
c.snapshots, err = restic.MemorizeList(ctx, c.repo, restic.SnapshotFile)
@ -256,8 +241,10 @@ func isS3Legacy(b backend.Backend) bool {
func (c *Checker) Packs(ctx context.Context, errChan chan<- error) {
defer close(errChan)
if isS3Legacy(c.repo.Backend()) {
errChan <- ErrLegacyLayout
if r, ok := c.repo.(*repository.Repository); ok {
if isS3Legacy(repository.AsS3Backend(r)) {
errChan <- ErrLegacyLayout
}
}
debug.Log("checking for %d packs", len(c.packs))
@ -522,142 +509,13 @@ func (c *Checker) GetPacks() map[restic.ID]int64 {
return c.packs
}
type partialReadError struct {
err error
}
func (e *partialReadError) Error() string {
return e.err.Error()
}
// checkPack reads a pack and checks the integrity of all blobs.
func checkPack(ctx context.Context, r restic.Repository, id restic.ID, blobs []restic.Blob, size int64, bufRd *bufio.Reader, dec *zstd.Decoder) error {
debug.Log("checking pack %v", id.String())
if len(blobs) == 0 {
return &ErrPackData{PackID: id, errs: []error{errors.New("pack is empty or not indexed")}}
}
// sanity check blobs in index
sort.Slice(blobs, func(i, j int) bool {
return blobs[i].Offset < blobs[j].Offset
})
idxHdrSize := pack.CalculateHeaderSize(blobs)
lastBlobEnd := 0
nonContinuousPack := false
for _, blob := range blobs {
if lastBlobEnd != int(blob.Offset) {
nonContinuousPack = true
}
lastBlobEnd = int(blob.Offset + blob.Length)
}
// size was calculated by masterindex.PackSize, thus there's no need to recalculate it here
var errs []error
if nonContinuousPack {
debug.Log("Index for pack contains gaps / overlaps, blobs: %v", blobs)
errs = append(errs, errors.New("index for pack contains gaps / overlapping blobs"))
}
// calculate hash on-the-fly while reading the pack and capture pack header
var hash restic.ID
var hdrBuf []byte
h := backend.Handle{Type: backend.PackFile, Name: id.String()}
err := r.Backend().Load(ctx, h, int(size), 0, func(rd io.Reader) error {
hrd := hashing.NewReader(rd, sha256.New())
bufRd.Reset(hrd)
it := repository.NewPackBlobIterator(id, bufRd, 0, blobs, r.Key(), dec)
for {
val, err := it.Next()
if err == repository.ErrPackEOF {
break
} else if err != nil {
return &partialReadError{err}
}
debug.Log(" check blob %v: %v", val.Handle.ID, val.Handle)
if val.Err != nil {
debug.Log(" error verifying blob %v: %v", val.Handle.ID, val.Err)
errs = append(errs, errors.Errorf("blob %v: %v", val.Handle.ID, val.Err))
}
}
// skip enough bytes until we reach the possible header start
curPos := lastBlobEnd
minHdrStart := int(size) - pack.MaxHeaderSize
if minHdrStart > curPos {
_, err := bufRd.Discard(minHdrStart - curPos)
if err != nil {
return &partialReadError{err}
}
}
// read remainder, which should be the pack header
var err error
hdrBuf, err = io.ReadAll(bufRd)
if err != nil {
return &partialReadError{err}
}
hash = restic.IDFromHash(hrd.Sum(nil))
return nil
})
if err != nil {
var e *partialReadError
isPartialReadError := errors.As(err, &e)
// failed to load the pack file, return as further checks cannot succeed anyways
debug.Log(" error streaming pack (partial %v): %v", isPartialReadError, err)
if isPartialReadError {
return &ErrPackData{PackID: id, errs: append(errs, errors.Errorf("partial download error: %w", err))}
}
// The check command suggests repairing files for which an `ErrPackData` is returned. However, this file
// completely failed to download, so there's no point in repairing anything.
return errors.Errorf("download error: %w", err)
}
if !hash.Equal(id) {
debug.Log("pack ID does not match, want %v, got %v", id, hash)
return &ErrPackData{PackID: id, errs: append(errs, errors.Errorf("unexpected pack id %v", hash))}
}
blobs, hdrSize, err := pack.List(r.Key(), bytes.NewReader(hdrBuf), int64(len(hdrBuf)))
if err != nil {
return &ErrPackData{PackID: id, errs: append(errs, err)}
}
if uint32(idxHdrSize) != hdrSize {
debug.Log("Pack header size does not match, want %v, got %v", idxHdrSize, hdrSize)
errs = append(errs, errors.Errorf("pack header size does not match, want %v, got %v", idxHdrSize, hdrSize))
}
idx := r.Index()
for _, blob := range blobs {
// Check if blob is contained in index and position is correct
idxHas := false
for _, pb := range idx.Lookup(blob.BlobHandle) {
if pb.PackID == id && pb.Blob == blob {
idxHas = true
break
}
}
if !idxHas {
errs = append(errs, errors.Errorf("blob %v is not contained in index or position is incorrect", blob.ID))
continue
}
}
if len(errs) > 0 {
return &ErrPackData{PackID: id, errs: errs}
}
return nil
}
// ReadData loads all data from the repository and checks the integrity.
func (c *Checker) ReadData(ctx context.Context, errChan chan<- error) {
c.ReadPacks(ctx, c.packs, nil, errChan)
}
const maxStreamBufferSize = 4 * 1024 * 1024
// ReadPacks loads data from specified packs and checks the integrity.
func (c *Checker) ReadPacks(ctx context.Context, packs map[restic.ID]int64, p *progress.Counter, errChan chan<- error) {
defer close(errChan)
@ -675,9 +533,7 @@ func (c *Checker) ReadPacks(ctx context.Context, packs map[restic.ID]int64, p *p
// run workers
for i := 0; i < workerCount; i++ {
g.Go(func() error {
// create a buffer that is large enough to be reused by repository.StreamPack
// this ensures that we can read the pack header later on
bufRd := bufio.NewReaderSize(nil, repository.MaxStreamBufferSize)
bufRd := bufio.NewReaderSize(nil, maxStreamBufferSize)
dec, err := zstd.NewReader(nil)
if err != nil {
panic(dec)
@ -696,7 +552,7 @@ func (c *Checker) ReadPacks(ctx context.Context, packs map[restic.ID]int64, p *p
}
}
err := checkPack(ctx, c.repo, ps.id, ps.blobs, ps.size, bufRd, dec)
err := repository.CheckPack(ctx, c.repo.(*repository.Repository), ps.id, ps.blobs, ps.size, bufRd, dec)
p.Add(1)
if err == nil {
continue

View File

@ -8,6 +8,7 @@ import (
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"testing"
"time"
@ -72,7 +73,7 @@ func assertOnlyMixedPackHints(t *testing.T, hints []error) {
}
func TestCheckRepo(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, _, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
chkr := checker.New(repo, false)
@ -90,14 +91,11 @@ func TestCheckRepo(t *testing.T) {
}
func TestMissingPack(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, be, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
packHandle := backend.Handle{
Type: restic.PackFile,
Name: "657f7fb64f6a854fff6fe9279998ee09034901eded4e6db9bcee0e59745bbce6",
}
test.OK(t, repo.Backend().Remove(context.TODO(), packHandle))
packID := restic.TestParseID("657f7fb64f6a854fff6fe9279998ee09034901eded4e6db9bcee0e59745bbce6")
test.OK(t, be.Remove(context.TODO(), backend.Handle{Type: restic.PackFile, Name: packID.String()}))
chkr := checker.New(repo, false)
hints, errs := chkr.LoadIndex(context.TODO(), nil)
@ -112,23 +110,20 @@ func TestMissingPack(t *testing.T) {
"expected exactly one error, got %v", len(errs))
if err, ok := errs[0].(*checker.PackError); ok {
test.Equals(t, packHandle.Name, err.ID.String())
test.Equals(t, packID, err.ID)
} else {
t.Errorf("expected error returned by checker.Packs() to be PackError, got %v", err)
}
}
func TestUnreferencedPack(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, be, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
// index 3f1a only references pack 60e0
packID := "60e0438dcb978ec6860cc1f8c43da648170ee9129af8f650f876bad19f8f788e"
indexHandle := backend.Handle{
Type: restic.IndexFile,
Name: "3f1abfcb79c6f7d0a3be517d2c83c8562fba64ef2c8e9a3544b4edaf8b5e3b44",
}
test.OK(t, repo.Backend().Remove(context.TODO(), indexHandle))
indexID := restic.TestParseID("3f1abfcb79c6f7d0a3be517d2c83c8562fba64ef2c8e9a3544b4edaf8b5e3b44")
test.OK(t, be.Remove(context.TODO(), backend.Handle{Type: restic.IndexFile, Name: indexID.String()}))
chkr := checker.New(repo, false)
hints, errs := chkr.LoadIndex(context.TODO(), nil)
@ -150,14 +145,11 @@ func TestUnreferencedPack(t *testing.T) {
}
func TestUnreferencedBlobs(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, _, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
snapshotHandle := backend.Handle{
Type: restic.SnapshotFile,
Name: "51d249d28815200d59e4be7b3f21a157b864dc343353df9d8e498220c2499b02",
}
test.OK(t, repo.Backend().Remove(context.TODO(), snapshotHandle))
snapshotID := restic.TestParseID("51d249d28815200d59e4be7b3f21a157b864dc343353df9d8e498220c2499b02")
test.OK(t, repo.RemoveUnpacked(context.TODO(), restic.SnapshotFile, snapshotID))
unusedBlobsBySnapshot := restic.BlobHandles{
restic.TestParseHandle("58c748bbe2929fdf30c73262bd8313fe828f8925b05d1d4a87fe109082acb849", restic.DataBlob),
@ -188,7 +180,7 @@ func TestUnreferencedBlobs(t *testing.T) {
}
func TestModifiedIndex(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, be, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
done := make(chan struct{})
@ -216,13 +208,13 @@ func TestModifiedIndex(t *testing.T) {
}()
wr := io.Writer(tmpfile)
var hw *hashing.Writer
if repo.Backend().Hasher() != nil {
hw = hashing.NewWriter(wr, repo.Backend().Hasher())
if be.Hasher() != nil {
hw = hashing.NewWriter(wr, be.Hasher())
wr = hw
}
// read the file from the backend
err = repo.Backend().Load(context.TODO(), h, 0, 0, func(rd io.Reader) error {
err = be.Load(context.TODO(), h, 0, 0, func(rd io.Reader) error {
_, err := io.Copy(wr, rd)
return err
})
@ -244,7 +236,7 @@ func TestModifiedIndex(t *testing.T) {
t.Fatal(err)
}
err = repo.Backend().Save(context.TODO(), h2, rd)
err = be.Save(context.TODO(), h2, rd)
if err != nil {
t.Fatal(err)
}
@ -265,7 +257,7 @@ func TestModifiedIndex(t *testing.T) {
var checkerDuplicateIndexTestData = filepath.Join("testdata", "duplicate-packs-in-index-test-repo.tar.gz")
func TestDuplicatePacksInIndex(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerDuplicateIndexTestData)
repo, _, cleanup := repository.TestFromFixture(t, checkerDuplicateIndexTestData)
defer cleanup()
chkr := checker.New(repo, false)
@ -325,42 +317,91 @@ func induceError(data []byte) {
data[pos] ^= 1
}
// errorOnceBackend randomly modifies data when reading a file for the first time.
type errorOnceBackend struct {
backend.Backend
m sync.Map
}
func (b *errorOnceBackend) Load(ctx context.Context, h backend.Handle, length int, offset int64, consumer func(rd io.Reader) error) error {
_, isRetry := b.m.LoadOrStore(h, struct{}{})
return b.Backend.Load(ctx, h, length, offset, func(rd io.Reader) error {
if !isRetry && h.Type != restic.ConfigFile {
return consumer(errorReadCloser{rd})
}
return consumer(rd)
})
}
func TestCheckerModifiedData(t *testing.T) {
repo := repository.TestRepository(t)
repo, be := repository.TestRepositoryWithVersion(t, 0)
sn := archiver.TestSnapshot(t, repo, ".", nil)
t.Logf("archived as %v", sn.ID().Str())
beError := &errorBackend{Backend: repo.Backend()}
checkRepo := repository.TestOpenBackend(t, beError)
errBe := &errorBackend{Backend: be}
chkr := checker.New(checkRepo, false)
for _, test := range []struct {
name string
be backend.Backend
damage func()
check func(t *testing.T, err error)
}{
{
"errorBackend",
errBe,
func() {
errBe.ProduceErrors = true
},
func(t *testing.T, err error) {
if err == nil {
t.Fatal("no error found, checker is broken")
}
},
},
{
"errorOnceBackend",
&errorOnceBackend{Backend: be},
func() {},
func(t *testing.T, err error) {
if !strings.Contains(err.Error(), "check successful on second attempt, original error pack") {
t.Fatalf("wrong error found, got %v", err)
}
},
},
} {
t.Run(test.name, func(t *testing.T) {
checkRepo := repository.TestOpenBackend(t, test.be)
hints, errs := chkr.LoadIndex(context.TODO(), nil)
if len(errs) > 0 {
t.Fatalf("expected no errors, got %v: %v", len(errs), errs)
}
chkr := checker.New(checkRepo, false)
if len(hints) > 0 {
t.Errorf("expected no hints, got %v: %v", len(hints), hints)
}
hints, errs := chkr.LoadIndex(context.TODO(), nil)
if len(errs) > 0 {
t.Fatalf("expected no errors, got %v: %v", len(errs), errs)
}
beError.ProduceErrors = true
errFound := false
for _, err := range checkPacks(chkr) {
t.Logf("pack error: %v", err)
}
if len(hints) > 0 {
t.Errorf("expected no hints, got %v: %v", len(hints), hints)
}
for _, err := range checkStruct(chkr) {
t.Logf("struct error: %v", err)
}
test.damage()
var err error
for _, err := range checkPacks(chkr) {
t.Logf("pack error: %v", err)
}
for _, err := range checkData(chkr) {
t.Logf("data error: %v", err)
errFound = true
}
for _, err := range checkStruct(chkr) {
t.Logf("struct error: %v", err)
}
if !errFound {
t.Fatal("no error found, checker is broken")
for _, cerr := range checkData(chkr) {
t.Logf("data error: %v", cerr)
if err == nil {
err = cerr
}
}
test.check(t, err)
})
}
}
@ -386,7 +427,7 @@ func (r *loadTreesOnceRepository) LoadTree(ctx context.Context, id restic.ID) (*
}
func TestCheckerNoDuplicateTreeDecodes(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, _, cleanup := repository.TestFromFixture(t, checkerTestData)
defer cleanup()
checkRepo := &loadTreesOnceRepository{
Repository: repo,
@ -534,7 +575,7 @@ func TestCheckerBlobTypeConfusion(t *testing.T) {
}
func loadBenchRepository(t *testing.B) (*checker.Checker, restic.Repository, func()) {
repo, cleanup := repository.TestFromFixture(t, checkerTestData)
repo, _, cleanup := repository.TestFromFixture(t, checkerTestData)
chkr := checker.New(repo, false)
hints, errs := chkr.LoadIndex(context.TODO(), nil)

View File

@ -299,7 +299,7 @@ func (k *Key) Open(dst, nonce, ciphertext, _ []byte) ([]byte, error) {
// check for plausible length
if len(ciphertext) < k.Overhead() {
return nil, errors.Errorf("trying to decrypt invalid data: ciphertext too small")
return nil, errors.Errorf("trying to decrypt invalid data: ciphertext too short")
}
l := len(ciphertext) - macSize

View File

@ -8,8 +8,6 @@ import (
"path/filepath"
"runtime"
"strings"
"github.com/restic/restic/internal/fs"
)
var opts struct {
@ -46,7 +44,7 @@ func initDebugLogger() {
fmt.Fprintf(os.Stderr, "debug log file %v\n", debugfile)
f, err := fs.OpenFile(debugfile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0600)
f, err := os.OpenFile(debugfile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0600)
if err != nil {
fmt.Fprintf(os.Stderr, "unable to open debug log file: %v\n", err)
os.Exit(2)

View File

@ -9,6 +9,7 @@ import (
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/walker"
"golang.org/x/sync/errgroup"
)
// A Dumper writes trees and files from a repository to a Writer
@ -16,11 +17,11 @@ import (
type Dumper struct {
cache *bloblru.Cache
format string
repo restic.BlobLoader
repo restic.Loader
w io.Writer
}
func New(format string, repo restic.BlobLoader, w io.Writer) *Dumper {
func New(format string, repo restic.Loader, w io.Writer) *Dumper {
return &Dumper{
cache: bloblru.New(64 << 20),
format: format,
@ -103,27 +104,77 @@ func (d *Dumper) WriteNode(ctx context.Context, node *restic.Node) error {
}
func (d *Dumper) writeNode(ctx context.Context, w io.Writer, node *restic.Node) error {
var (
buf []byte
err error
)
for _, id := range node.Content {
blob, ok := d.cache.Get(id)
if !ok {
blob, err = d.repo.LoadBlob(ctx, restic.DataBlob, id, buf)
if err != nil {
return err
}
buf = d.cache.Add(id, blob) // Reuse evicted buffer.
}
if _, err := w.Write(blob); err != nil {
return errors.Wrap(err, "Write")
}
type loadTask struct {
id restic.ID
out chan<- []byte
}
type writeTask struct {
data <-chan []byte
}
return nil
loaderCh := make(chan loadTask)
// per worker: allows for one blob that gets downloaded + one blob that's queued for writing
writerCh := make(chan writeTask, d.repo.Connections()*2)
wg, ctx := errgroup.WithContext(ctx)
wg.Go(func() error {
defer close(loaderCh)
defer close(writerCh)
for _, id := range node.Content {
// non-blocking blob handover to allow the loader to load the next blob
// while the previous one is still being written
ch := make(chan []byte, 1)
select {
case loaderCh <- loadTask{id: id, out: ch}:
case <-ctx.Done():
return ctx.Err()
}
select {
case writerCh <- writeTask{data: ch}:
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
for i := uint(0); i < d.repo.Connections(); i++ {
wg.Go(func() error {
for task := range loaderCh {
blob, err := d.cache.GetOrCompute(task.id, func() ([]byte, error) {
return d.repo.LoadBlob(ctx, restic.DataBlob, task.id, nil)
})
if err != nil {
return err
}
select {
case task.out <- blob:
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
}
wg.Go(func() error {
for result := range writerCh {
select {
case data := <-result.data:
if _, err := w.Write(data); err != nil {
return errors.Wrap(err, "Write")
}
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
return wg.Wait()
}
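Editor's illustration (not part of this change): a runnable sketch of the loader/writer pipeline used in writeNode above. A dispatcher hands each item to a bounded pool of loaders and, in the same order, to a single writer; the per-item channel with capacity one lets a loader start on the next item while the previous result is still being written. All names and the simulated download are hypothetical stand-ins.
package main
import (
    "context"
    "fmt"
    "time"
    "golang.org/x/sync/errgroup"
)
func main() {
    ids := []int{1, 2, 3, 4, 5}
    const workers = 3
    type task struct {
        id  int
        out chan<- string
    }
    type result struct{ data <-chan string }
    loadCh := make(chan task)
    writeCh := make(chan result, workers*2)
    wg, ctx := errgroup.WithContext(context.Background())
    // dispatcher: submit each id to the loaders and, in the same order, to the writer
    wg.Go(func() error {
        defer close(loadCh)
        defer close(writeCh)
        for _, id := range ids {
            ch := make(chan string, 1) // buffered: loader can move on before the writer reads
            select {
            case loadCh <- task{id: id, out: ch}:
            case <-ctx.Done():
                return ctx.Err()
            }
            select {
            case writeCh <- result{data: ch}:
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    })
    // loaders: fetch items concurrently
    for i := 0; i < workers; i++ {
        wg.Go(func() error {
            for t := range loadCh {
                time.Sleep(10 * time.Millisecond) // simulated download
                select {
                case t.out <- fmt.Sprintf("blob-%d", t.id):
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            return nil
        })
    }
    // writer: drains results in submission order, so output order is preserved
    wg.Go(func() error {
        for r := range writeCh {
            select {
            case data := <-r.data:
                fmt.Println(data)
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    })
    if err := wg.Wait(); err != nil {
        fmt.Println("error:", err)
    }
}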
// IsDir checks if the given node is a directory.

View File

@ -43,22 +43,28 @@ func Is(x, y error) bool { return stderrors.Is(x, y) }
// unwrap errors returned by [Join].
func Unwrap(err error) error { return stderrors.Unwrap(err) }
// CombineErrors combines multiple errors into a single error.
func CombineErrors(errors ...error) error {
// CombineErrors combines multiple errors into a single error after filtering out any nil values.
// If no errors are passed, it returns nil.
// If one error is passed, it simply returns that same error.
func CombineErrors(errors ...error) (err error) {
var combinedErrorMsg string
for _, err := range errors {
if err != nil {
var multipleErrors bool
for _, errVal := range errors {
if errVal != nil {
if combinedErrorMsg != "" {
combinedErrorMsg += "; " // Separate error messages with a delimiter
multipleErrors = true
} else {
// Set the first error
err = errVal
}
combinedErrorMsg += err.Error()
combinedErrorMsg += errVal.Error()
}
}
if combinedErrorMsg == "" {
return nil // No errors, return nil
return nil // If no errors, return nil
} else if !multipleErrors {
return err // If only one error, return that first error
}
return fmt.Errorf("multiple errors occurred: [%s]", combinedErrorMsg)
}
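For illustration only (not part of this change): a small standalone program mirroring the documented CombineErrors contract. The local combine function is a stand-in written against the doc comment above, not restic's implementation.
package main
import (
    "errors"
    "fmt"
    "io"
    "os"
)
// combine mirrors the documented contract: drop nils, return nil when nothing is left,
// return a single error unchanged, and join several errors into one message.
func combine(errs ...error) error {
    var kept []error
    for _, e := range errs {
        if e != nil {
            kept = append(kept, e)
        }
    }
    switch len(kept) {
    case 0:
        return nil
    case 1:
        return kept[0]
    }
    msg := ""
    for i, e := range kept {
        if i > 0 {
            msg += "; "
        }
        msg += e.Error()
    }
    return fmt.Errorf("multiple errors occurred: [%s]", msg)
}
func main() {
    fmt.Println(combine(nil, nil))                  // <nil>
    fmt.Println(errors.Is(combine(io.EOF), io.EOF)) // true: a single error passes through unchanged
    fmt.Println(combine(io.EOF, os.ErrClosed))      // multiple errors occurred: [EOF; file already closed]
}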

View File

@ -5,6 +5,7 @@ var Flag = New()
// flag names are written in kebab-case
const (
BackendErrorRedesign FlagName = "backend-error-redesign"
DeprecateLegacyIndex FlagName = "deprecate-legacy-index"
DeprecateS3LegacyLayout FlagName = "deprecate-s3-legacy-layout"
DeviceIDForHardlinks FlagName = "device-id-for-hardlinks"
@ -12,6 +13,7 @@ const (
func init() {
Flag.SetFlags(map[FlagName]FlagDesc{
BackendErrorRedesign: {Type: Beta, Description: "enforce timeouts for stuck HTTP requests and use new backend error handling design."},
DeprecateLegacyIndex: {Type: Beta, Description: "disable support for index format used by restic 0.1.0. Use `restic repair index` to update the index if necessary."},
DeprecateS3LegacyLayout: {Type: Beta, Description: "disable support for S3 legacy layout used up to restic 0.7.0. Use `RESTIC_FEATURES=deprecate-s3-legacy-layout=false restic migrate s3_layout` to migrate your S3 repository if necessary."},
DeviceIDForHardlinks: {Type: Alpha, Description: "store deviceID only for hardlinks to reduce metadata changes for example when using btrfs subvolumes. Will be removed in a future restic version after repository format 3 is available"},

View File

@ -3,41 +3,108 @@ package fs
import (
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/options"
)
// ErrorHandler is used to report errors via callback
type ErrorHandler func(item string, err error) error
// VSSConfig holds extended options of windows volume shadow copy service.
type VSSConfig struct {
ExcludeAllMountPoints bool `option:"exclude-all-mount-points" help:"exclude mountpoints from snapshotting on all volumes"`
ExcludeVolumes string `option:"exclude-volumes" help:"semicolon separated list of volumes to exclude from snapshotting (ex. 'c:\\;e:\\mnt;\\\\?\\Volume{...}')"`
Timeout time.Duration `option:"timeout" help:"time that the VSS can spend creating snapshot before timing out"`
Provider string `option:"provider" help:"VSS provider identifier which will be used for snapshotting"`
}
func init() {
if runtime.GOOS == "windows" {
options.Register("vss", VSSConfig{})
}
}
// NewVSSConfig returns a new VSSConfig with the default values filled in.
func NewVSSConfig() VSSConfig {
return VSSConfig{
Timeout: time.Second * 120,
}
}
// ParseVSSConfig parses a VSS extended options to VSSConfig struct.
func ParseVSSConfig(o options.Options) (VSSConfig, error) {
cfg := NewVSSConfig()
o = o.Extract("vss")
if err := o.Apply("vss", &cfg); err != nil {
return VSSConfig{}, err
}
return cfg, nil
}
// ErrorHandler is used to report errors via callback.
type ErrorHandler func(item string, err error)
// MessageHandler is used to report errors/messages via callbacks.
type MessageHandler func(msg string, args ...interface{})
// VolumeFilter is used to filter a volume by its mount point or GUID path.
type VolumeFilter func(volume string) bool
// LocalVss is a wrapper around the local file system which uses windows volume
// shadow copy service (VSS) in a transparent way.
type LocalVss struct {
FS
snapshots map[string]VssSnapshot
failedSnapshots map[string]struct{}
mutex sync.RWMutex
msgError ErrorHandler
msgMessage MessageHandler
snapshots map[string]VssSnapshot
failedSnapshots map[string]struct{}
mutex sync.RWMutex
msgError ErrorHandler
msgMessage MessageHandler
excludeAllMountPoints bool
excludeVolumes map[string]struct{}
timeout time.Duration
provider string
}
// statically ensure that LocalVss implements FS.
var _ FS = &LocalVss{}
// parseMountPoints tries to convert a semicolon-separated list of mount points
// to a map of lowercased volume GUID paths. Mount points already in volume
// GUID path format will be validated and normalized.
func parseMountPoints(list string, msgError ErrorHandler) (volumes map[string]struct{}) {
if list == "" {
return
}
for _, s := range strings.Split(list, ";") {
if v, err := GetVolumeNameForVolumeMountPoint(s); err != nil {
msgError(s, errors.Errorf("failed to parse vss.exclude-volumes [%s]: %s", s, err))
} else {
if volumes == nil {
volumes = make(map[string]struct{})
}
volumes[strings.ToLower(v)] = struct{}{}
}
}
return
}
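Editor's illustration (not part of this change): a runnable sketch of the exclude-volumes handling, covering both the list parsing above and the isMountPointIncluded check defined further down. A stand-in normalize function replaces the Windows-only GetVolumeNameForVolumeMountPoint; names and outputs are hypothetical.
package main
import (
    "fmt"
    "strings"
)
// normalize stands in for the Windows volume GUID lookup: lowercase and ensure a trailing backslash.
func normalize(mountPoint string) string {
    return strings.ToLower(strings.TrimSuffix(mountPoint, `\`)) + `\`
}
// parseMountPoints turns a semicolon-separated list into a set of normalized volume names.
func parseMountPoints(list string) map[string]struct{} {
    if list == "" {
        return nil
    }
    volumes := make(map[string]struct{})
    for _, s := range strings.Split(list, ";") {
        volumes[normalize(s)] = struct{}{}
    }
    return volumes
}
// isMountPointIncluded reports whether a mount point should be snapshotted:
// everything is included unless its normalized form is in the exclude set.
func isMountPointIncluded(excludeVolumes map[string]struct{}, mountPoint string) bool {
    if excludeVolumes == nil {
        return true
    }
    _, excluded := excludeVolumes[normalize(mountPoint)]
    return !excluded
}
func main() {
    exclude := parseMountPoints(`D:\;E:\mnt`)
    fmt.Println(isMountPointIncluded(exclude, `C:\`))    // true
    fmt.Println(isMountPointIncluded(exclude, `d:`))     // false
    fmt.Println(isMountPointIncluded(exclude, `E:\mnt`)) // false
}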
// NewLocalVss creates a new wrapper around the windows filesystem using volume
// shadow copy service to access locked files.
func NewLocalVss(msgError ErrorHandler, msgMessage MessageHandler) *LocalVss {
func NewLocalVss(msgError ErrorHandler, msgMessage MessageHandler, cfg VSSConfig) *LocalVss {
return &LocalVss{
FS: Local{},
snapshots: make(map[string]VssSnapshot),
failedSnapshots: make(map[string]struct{}),
msgError: msgError,
msgMessage: msgMessage,
FS: Local{},
snapshots: make(map[string]VssSnapshot),
failedSnapshots: make(map[string]struct{}),
msgError: msgError,
msgMessage: msgMessage,
excludeAllMountPoints: cfg.ExcludeAllMountPoints,
excludeVolumes: parseMountPoints(cfg.ExcludeVolumes, msgError),
timeout: cfg.Timeout,
provider: cfg.Provider,
}
}
@ -50,7 +117,7 @@ func (fs *LocalVss) DeleteSnapshots() {
for volumeName, snapshot := range fs.snapshots {
if err := snapshot.Delete(); err != nil {
_ = fs.msgError(volumeName, errors.Errorf("failed to delete VSS snapshot: %s", err))
fs.msgError(volumeName, errors.Errorf("failed to delete VSS snapshot: %s", err))
activeSnapshots[volumeName] = snapshot
}
}
@ -78,12 +145,27 @@ func (fs *LocalVss) Lstat(name string) (os.FileInfo, error) {
return os.Lstat(fs.snapshotPath(name))
}
// isMountPointIncluded is true if the given mount point is included by the user.
func (fs *LocalVss) isMountPointIncluded(mountPoint string) bool {
if fs.excludeVolumes == nil {
return true
}
volume, err := GetVolumeNameForVolumeMountPoint(mountPoint)
if err != nil {
fs.msgError(mountPoint, errors.Errorf("failed to get volume from mount point [%s]: %s", mountPoint, err))
return true
}
_, ok := fs.excludeVolumes[strings.ToLower(volume)]
return !ok
}
// snapshotPath returns the path inside a VSS snapshot if one already exists.
// If the path is not yet available as a snapshot, a snapshot is created.
// If creation of a snapshot fails, the file's original path is returned as
// a fallback.
func (fs *LocalVss) snapshotPath(path string) string {
fixPath := fixpath(path)
if strings.HasPrefix(fixPath, `\\?\UNC\`) {
@ -114,23 +196,36 @@ func (fs *LocalVss) snapshotPath(path string) string {
if !snapshotExists && !snapshotFailed {
vssVolume := volumeNameLower + string(filepath.Separator)
fs.msgMessage("creating VSS snapshot for [%s]\n", vssVolume)
if snapshot, err := NewVssSnapshot(vssVolume, 120, fs.msgError); err != nil {
_ = fs.msgError(vssVolume, errors.Errorf("failed to create snapshot for [%s]: %s",
vssVolume, err))
if !fs.isMountPointIncluded(vssVolume) {
fs.msgMessage("snapshots for [%s] excluded by user\n", vssVolume)
fs.failedSnapshots[volumeNameLower] = struct{}{}
} else {
fs.snapshots[volumeNameLower] = snapshot
fs.msgMessage("successfully created snapshot for [%s]\n", vssVolume)
if len(snapshot.mountPointInfo) > 0 {
fs.msgMessage("mountpoints in snapshot volume [%s]:\n", vssVolume)
for mp, mpInfo := range snapshot.mountPointInfo {
info := ""
if !mpInfo.IsSnapshotted() {
info = " (not snapshotted)"
fs.msgMessage("creating VSS snapshot for [%s]\n", vssVolume)
var includeVolume VolumeFilter
if !fs.excludeAllMountPoints {
includeVolume = func(volume string) bool {
return fs.isMountPointIncluded(volume)
}
}
if snapshot, err := NewVssSnapshot(fs.provider, vssVolume, fs.timeout, includeVolume, fs.msgError); err != nil {
fs.msgError(vssVolume, errors.Errorf("failed to create snapshot for [%s]: %s",
vssVolume, err))
fs.failedSnapshots[volumeNameLower] = struct{}{}
} else {
fs.snapshots[volumeNameLower] = snapshot
fs.msgMessage("successfully created snapshot for [%s]\n", vssVolume)
if len(snapshot.mountPointInfo) > 0 {
fs.msgMessage("mountpoints in snapshot volume [%s]:\n", vssVolume)
for mp, mpInfo := range snapshot.mountPointInfo {
info := ""
if !mpInfo.IsSnapshotted() {
info = " (not snapshotted)"
}
fs.msgMessage(" - %s%s\n", mp, info)
}
fs.msgMessage(" - %s%s\n", mp, info)
}
}
}
@ -173,9 +268,8 @@ func (fs *LocalVss) snapshotPath(path string) string {
snapshotPath = fs.Join(snapshot.GetSnapshotDeviceObject(),
strings.TrimPrefix(fixPath, volumeName))
if snapshotPath == snapshot.GetSnapshotDeviceObject() {
snapshotPath = snapshotPath + string(filepath.Separator)
snapshotPath += string(filepath.Separator)
}
} else {
// no snapshot is available for the requested path:
// -> try to backup without a snapshot

View File

@ -0,0 +1,285 @@
// +build windows
package fs
import (
"fmt"
"regexp"
"strings"
"testing"
"time"
ole "github.com/go-ole/go-ole"
"github.com/restic/restic/internal/options"
)
func matchStrings(ptrs []string, strs []string) bool {
if len(ptrs) != len(strs) {
return false
}
for i, p := range ptrs {
if p == "" {
return false
}
matched, err := regexp.MatchString(p, strs[i])
if err != nil {
panic(err)
}
if !matched {
return false
}
}
return true
}
func matchMap(strs []string, m map[string]struct{}) bool {
if len(strs) != len(m) {
return false
}
for _, s := range strs {
if _, ok := m[s]; !ok {
return false
}
}
return true
}
func TestVSSConfig(t *testing.T) {
type config struct {
excludeAllMountPoints bool
timeout time.Duration
provider string
}
setTests := []struct {
input options.Options
output config
}{
{
options.Options{
"vss.timeout": "6h38m42s",
"vss.provider": "Ms",
},
config{
timeout: 23922000000000,
provider: "Ms",
},
},
{
options.Options{
"vss.exclude-all-mount-points": "t",
"vss.provider": "{b5946137-7b9f-4925-af80-51abd60b20d5}",
},
config{
excludeAllMountPoints: true,
timeout: 120000000000,
provider: "{b5946137-7b9f-4925-af80-51abd60b20d5}",
},
},
{
options.Options{
"vss.exclude-all-mount-points": "0",
"vss.exclude-volumes": "",
"vss.timeout": "120s",
"vss.provider": "Microsoft Software Shadow Copy provider 1.0",
},
config{
timeout: 120000000000,
provider: "Microsoft Software Shadow Copy provider 1.0",
},
},
}
for i, test := range setTests {
t.Run(fmt.Sprintf("test-%d", i), func(t *testing.T) {
cfg, err := ParseVSSConfig(test.input)
if err != nil {
t.Fatal(err)
}
errorHandler := func(item string, err error) {
t.Fatalf("unexpected error (%v)", err)
}
messageHandler := func(msg string, args ...interface{}) {
t.Fatalf("unexpected message (%s)", fmt.Sprintf(msg, args))
}
dst := NewLocalVss(errorHandler, messageHandler, cfg)
if dst.excludeAllMountPoints != test.output.excludeAllMountPoints ||
dst.excludeVolumes != nil || dst.timeout != test.output.timeout ||
dst.provider != test.output.provider {
t.Fatalf("wrong result, want:\n %#v\ngot:\n %#v", test.output, dst)
}
})
}
}
func TestParseMountPoints(t *testing.T) {
volumeMatch := regexp.MustCompile(`^\\\\\?\\Volume\{[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}\}\\$`)
// It's not a good idea to test functions based on GetVolumeNameForVolumeMountPoint by calling
// GetVolumeNameForVolumeMountPoint itself, but we have a restricted test environment:
// we cannot manage volumes and can only be sure that the mount point C:\ exists
sysVolume, err := GetVolumeNameForVolumeMountPoint("C:")
if err != nil {
t.Fatal(err)
}
// We don't know a valid volume GUID path for c:\, but we'll at least check its format
if !volumeMatch.MatchString(sysVolume) {
t.Fatalf("invalid volume GUID path: %s", sysVolume)
}
// Changing the case and removing the trailing backslash allows testing
// the equality of different ways of writing a volume name
sysVolumeMutated := strings.ToUpper(sysVolume[:len(sysVolume)-1])
sysVolumeMatch := strings.ToLower(sysVolume)
type check struct {
volume string
result bool
}
setTests := []struct {
input options.Options
output []string
checks []check
errors []string
}{
{
options.Options{
"vss.exclude-volumes": `c:;c:\;` + sysVolume + `;` + sysVolumeMutated,
},
[]string{
sysVolumeMatch,
},
[]check{
{`c:\`, false},
{`c:`, false},
{sysVolume, false},
{sysVolumeMutated, false},
},
[]string{},
},
{
options.Options{
"vss.exclude-volumes": `z:\nonexistent;c:;c:\windows\;\\?\Volume{39b9cac2-bcdb-4d51-97c8-0d0677d607fb}\`,
},
[]string{
sysVolumeMatch,
},
[]check{
{`c:\windows\`, true},
{`\\?\Volume{39b9cac2-bcdb-4d51-97c8-0d0677d607fb}\`, true},
{`c:`, false},
{``, true},
},
[]string{
`failed to parse vss\.exclude-volumes \[z:\\nonexistent\]:.*`,
`failed to parse vss\.exclude-volumes \[c:\\windows\\\]:.*`,
`failed to parse vss\.exclude-volumes \[\\\\\?\\Volume\{39b9cac2-bcdb-4d51-97c8-0d0677d607fb\}\\\]:.*`,
`failed to get volume from mount point \[c:\\windows\\\]:.*`,
`failed to get volume from mount point \[\\\\\?\\Volume\{39b9cac2-bcdb-4d51-97c8-0d0677d607fb\}\\\]:.*`,
`failed to get volume from mount point \[\]:.*`,
},
},
}
for i, test := range setTests {
t.Run(fmt.Sprintf("test-%d", i), func(t *testing.T) {
cfg, err := ParseVSSConfig(test.input)
if err != nil {
t.Fatal(err)
}
var log []string
errorHandler := func(item string, err error) {
log = append(log, strings.TrimSpace(err.Error()))
}
messageHandler := func(msg string, args ...interface{}) {
t.Fatalf("unexpected message (%s)", fmt.Sprintf(msg, args))
}
dst := NewLocalVss(errorHandler, messageHandler, cfg)
if !matchMap(test.output, dst.excludeVolumes) {
t.Fatalf("wrong result, want:\n %#v\ngot:\n %#v",
test.output, dst.excludeVolumes)
}
for _, c := range test.checks {
if dst.isMountPointIncluded(c.volume) != c.result {
t.Fatalf(`wrong check: isMountPointIncluded("%s") != %v`, c.volume, c.result)
}
}
if !matchStrings(test.errors, log) {
t.Fatalf("wrong log, want:\n %#v\ngot:\n %#v", test.errors, log)
}
})
}
}
func TestParseProvider(t *testing.T) {
msProvider := ole.NewGUID("{b5946137-7b9f-4925-af80-51abd60b20d5}")
setTests := []struct {
provider string
id *ole.GUID
result string
}{
{
"",
ole.IID_NULL,
"",
},
{
"mS",
msProvider,
"",
},
{
"{B5946137-7b9f-4925-Af80-51abD60b20d5}",
msProvider,
"",
},
{
"Microsoft Software Shadow Copy provider 1.0",
msProvider,
"",
},
{
"{04560982-3d7d-4bbc-84f7-0712f833a28f}",
nil,
`invalid VSS provider "{04560982-3d7d-4bbc-84f7-0712f833a28f}"`,
},
{
"non-existent provider",
nil,
`invalid VSS provider "non-existent provider"`,
},
}
_ = ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED)
for i, test := range setTests {
t.Run(fmt.Sprintf("test-%d", i), func(t *testing.T) {
id, err := getProviderID(test.provider)
if err != nil && id != nil {
t.Fatalf("err!=nil but id=%v", id)
}
if test.result != "" || err != nil {
var result string
if err != nil {
result = err.Error()
}
if test.result != result || test.result == "" {
t.Fatalf("wrong result, want:\n %#v\ngot:\n %#v", test.result, result)
}
} else if !ole.IsEqualGUID(id, test.id) {
t.Fatalf("wrong id, want:\n %s\ngot:\n %s", test.id.String(), id.String())
}
})
}
}

internal/fs/sd_windows.go (new file, 439 lines)
View File

@ -0,0 +1,439 @@
package fs
import (
"bytes"
"encoding/binary"
"fmt"
"sync"
"sync/atomic"
"syscall"
"unicode/utf16"
"unsafe"
"github.com/restic/restic/internal/debug"
"golang.org/x/sys/windows"
)
var (
onceBackup sync.Once
onceRestore sync.Once
// SeBackupPrivilege allows the application to bypass file and directory ACLs to back up files and directories.
SeBackupPrivilege = "SeBackupPrivilege"
// SeRestorePrivilege allows the application to bypass file and directory ACLs to restore files and directories.
SeRestorePrivilege = "SeRestorePrivilege"
// SeSecurityPrivilege allows read and write access to all SACLs.
SeSecurityPrivilege = "SeSecurityPrivilege"
// SeTakeOwnershipPrivilege allows the application to take ownership of files and directories, regardless of the permissions set on them.
SeTakeOwnershipPrivilege = "SeTakeOwnershipPrivilege"
lowerPrivileges atomic.Bool
)
// Flags for backup and restore with admin permissions
var highSecurityFlags windows.SECURITY_INFORMATION = windows.OWNER_SECURITY_INFORMATION | windows.GROUP_SECURITY_INFORMATION | windows.DACL_SECURITY_INFORMATION | windows.SACL_SECURITY_INFORMATION | windows.LABEL_SECURITY_INFORMATION | windows.ATTRIBUTE_SECURITY_INFORMATION | windows.SCOPE_SECURITY_INFORMATION | windows.BACKUP_SECURITY_INFORMATION | windows.PROTECTED_DACL_SECURITY_INFORMATION | windows.PROTECTED_SACL_SECURITY_INFORMATION | windows.UNPROTECTED_DACL_SECURITY_INFORMATION | windows.UNPROTECTED_SACL_SECURITY_INFORMATION
// Flags for backup without admin permissions. If there are no admin permissions, only the current user's owner, group and DACL will be backed up.
var lowBackupSecurityFlags windows.SECURITY_INFORMATION = windows.OWNER_SECURITY_INFORMATION | windows.GROUP_SECURITY_INFORMATION | windows.DACL_SECURITY_INFORMATION | windows.LABEL_SECURITY_INFORMATION | windows.ATTRIBUTE_SECURITY_INFORMATION | windows.SCOPE_SECURITY_INFORMATION | windows.PROTECTED_DACL_SECURITY_INFORMATION | windows.UNPROTECTED_DACL_SECURITY_INFORMATION
// Flags for restore without admin permissions. If there are no admin permissions, only the DACL from the SD can be restored and owner and group will be set based on the current user.
var lowRestoreSecurityFlags windows.SECURITY_INFORMATION = windows.DACL_SECURITY_INFORMATION | windows.ATTRIBUTE_SECURITY_INFORMATION | windows.PROTECTED_DACL_SECURITY_INFORMATION
// GetSecurityDescriptor takes the path of the file and returns the SecurityDescriptor for the file.
// This needs admin permissions or SeBackupPrivilege for getting the full SD.
// If there are no admin permissions, only the current user's owner, group and DACL will be retrieved.
func GetSecurityDescriptor(filePath string) (securityDescriptor *[]byte, err error) {
onceBackup.Do(enableBackupPrivilege)
var sd *windows.SECURITY_DESCRIPTOR
if lowerPrivileges.Load() {
sd, err = getNamedSecurityInfoLow(filePath)
} else {
sd, err = getNamedSecurityInfoHigh(filePath)
}
if err != nil {
if !lowerPrivileges.Load() && isHandlePrivilegeNotHeldError(err) {
// If ERROR_PRIVILEGE_NOT_HELD is encountered, fallback to backups/restores using lower non-admin privileges.
lowerPrivileges.Store(true)
sd, err = getNamedSecurityInfoLow(filePath)
if err != nil {
return nil, fmt.Errorf("get low-level named security info failed with: %w", err)
}
} else {
return nil, fmt.Errorf("get named security info failed with: %w", err)
}
}
sdBytes, err := securityDescriptorStructToBytes(sd)
if err != nil {
return nil, fmt.Errorf("convert security descriptor to bytes failed: %w", err)
}
return &sdBytes, nil
}
// SetSecurityDescriptor sets the SecurityDescriptor for the file at the specified path.
// This needs admin permissions or SeRestorePrivilege, SeSecurityPrivilege and SeTakeOwnershipPrivilege
// for setting the full SD.
// If there are no admin permissions/required privileges, only the DACL from the SD can be set and
// owner and group will be set based on the current user.
func SetSecurityDescriptor(filePath string, securityDescriptor *[]byte) error {
onceRestore.Do(enableRestorePrivilege)
// Set the security descriptor on the file
sd, err := SecurityDescriptorBytesToStruct(*securityDescriptor)
if err != nil {
return fmt.Errorf("error converting bytes to security descriptor: %w", err)
}
owner, _, err := sd.Owner()
if err != nil {
//Do not set partial values.
owner = nil
}
group, _, err := sd.Group()
if err != nil {
//Do not set partial values.
group = nil
}
dacl, _, err := sd.DACL()
if err != nil {
//Do not set partial values.
dacl = nil
}
sacl, _, err := sd.SACL()
if err != nil {
//Do not set partial values.
sacl = nil
}
if lowerPrivileges.Load() {
err = setNamedSecurityInfoLow(filePath, dacl)
} else {
err = setNamedSecurityInfoHigh(filePath, owner, group, dacl, sacl)
}
if err != nil {
if !lowerPrivileges.Load() && isHandlePrivilegeNotHeldError(err) {
// If ERROR_PRIVILEGE_NOT_HELD is encountered, fallback to backups/restores using lower non-admin privileges.
lowerPrivileges.Store(true)
err = setNamedSecurityInfoLow(filePath, dacl)
if err != nil {
return fmt.Errorf("set low-level named security info failed with: %w", err)
}
} else {
return fmt.Errorf("set named security info failed with: %w", err)
}
}
return nil
}
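Editor's illustration (not part of this change): a minimal sketch of the privilege fallback pattern used by GetSecurityDescriptor and SetSecurityDescriptor above. The privileged call is tried first; once a privilege-not-held error is seen, an atomic.Bool remembers it so later calls go straight to the unprivileged variant. doHigh, doLow and the error value are hypothetical stand-ins for the Windows security calls.
package main
import (
    "errors"
    "fmt"
    "sync/atomic"
)
var errPrivilegeNotHeld = errors.New("a required privilege is not held by the client")
var lowerPrivileges atomic.Bool
func doHigh() (string, error) { return "", errPrivilegeNotHeld } // e.g. running without admin rights
func doLow() (string, error)  { return "partial security descriptor", nil }
func getDescriptor() (string, error) {
    if lowerPrivileges.Load() {
        return doLow()
    }
    sd, err := doHigh()
    if err != nil {
        if errors.Is(err, errPrivilegeNotHeld) {
            // remember the downgrade so subsequent calls skip the privileged attempt
            lowerPrivileges.Store(true)
            return doLow()
        }
        return "", err
    }
    return sd, nil
}
func main() {
    fmt.Println(getDescriptor()) // falls back once
    fmt.Println(getDescriptor()) // uses the low-privilege path directly
}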
// getNamedSecurityInfoHigh gets the higher level SecurityDescriptor which requires admin permissions.
func getNamedSecurityInfoHigh(filePath string) (*windows.SECURITY_DESCRIPTOR, error) {
return windows.GetNamedSecurityInfo(filePath, windows.SE_FILE_OBJECT, highSecurityFlags)
}
// getNamedSecurityInfoLow gets the lower level SecurityDescriptor which requires no admin permissions.
func getNamedSecurityInfoLow(filePath string) (*windows.SECURITY_DESCRIPTOR, error) {
return windows.GetNamedSecurityInfo(filePath, windows.SE_FILE_OBJECT, lowBackupSecurityFlags)
}
// setNamedSecurityInfoHigh sets the higher level SecurityDescriptor which requires admin permissions.
func setNamedSecurityInfoHigh(filePath string, owner *windows.SID, group *windows.SID, dacl *windows.ACL, sacl *windows.ACL) error {
return windows.SetNamedSecurityInfo(filePath, windows.SE_FILE_OBJECT, highSecurityFlags, owner, group, dacl, sacl)
}
// setNamedSecurityInfoLow sets the lower level SecurityDescriptor which requires no admin permissions.
func setNamedSecurityInfoLow(filePath string, dacl *windows.ACL) error {
return windows.SetNamedSecurityInfo(filePath, windows.SE_FILE_OBJECT, lowRestoreSecurityFlags, nil, nil, dacl, nil)
}
// enableBackupPrivilege enables privilege for backing up security descriptors
func enableBackupPrivilege() {
err := enableProcessPrivileges([]string{SeBackupPrivilege})
if err != nil {
debug.Log("error enabling backup privilege: %v", err)
}
}
// enableRestorePrivilege enables privilege for restoring security descriptors
func enableRestorePrivilege() {
err := enableProcessPrivileges([]string{SeRestorePrivilege, SeSecurityPrivilege, SeTakeOwnershipPrivilege})
if err != nil {
debug.Log("error enabling restore/security privilege: %v", err)
}
}
// isHandlePrivilegeNotHeldError checks if the error is ERROR_PRIVILEGE_NOT_HELD
func isHandlePrivilegeNotHeldError(err error) bool {
// Use a type assertion to check if the error is of type syscall.Errno
if errno, ok := err.(syscall.Errno); ok {
// Compare the error code to the expected value
return errno == windows.ERROR_PRIVILEGE_NOT_HELD
}
return false
}
// SecurityDescriptorBytesToStruct converts the security descriptor bytes representation
// into a pointer to windows SECURITY_DESCRIPTOR.
func SecurityDescriptorBytesToStruct(sd []byte) (*windows.SECURITY_DESCRIPTOR, error) {
if l := int(unsafe.Sizeof(windows.SECURITY_DESCRIPTOR{})); len(sd) < l {
return nil, fmt.Errorf("securityDescriptor (%d) smaller than expected (%d): %w", len(sd), l, windows.ERROR_INCORRECT_SIZE)
}
s := (*windows.SECURITY_DESCRIPTOR)(unsafe.Pointer(&sd[0]))
return s, nil
}
// securityDescriptorStructToBytes converts the pointer to windows SECURITY_DESCRIPTOR
// into a security descriptor bytes representation.
func securityDescriptorStructToBytes(sd *windows.SECURITY_DESCRIPTOR) ([]byte, error) {
b := unsafe.Slice((*byte)(unsafe.Pointer(sd)), sd.Length())
return b, nil
}
// The code below was adapted from
// https://github.com/microsoft/go-winio/blob/3c9576c9346a1892dee136329e7e15309e82fb4f/privilege.go
// under MIT license.
// The MIT License (MIT)
// Copyright (c) 2015 Microsoft
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
var (
modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
procAdjustTokenPrivileges = modadvapi32.NewProc("AdjustTokenPrivileges")
procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
)
// Do the interface allocations only once for common
// Errno values.
const (
errnoErrorIOPending = 997
//revive:disable-next-line:var-naming ALL_CAPS
SE_PRIVILEGE_ENABLED = windows.SE_PRIVILEGE_ENABLED
//revive:disable-next-line:var-naming ALL_CAPS
ERROR_NOT_ALL_ASSIGNED windows.Errno = windows.ERROR_NOT_ALL_ASSIGNED
)
var (
errErrorIOPending error = syscall.Errno(errnoErrorIOPending)
errErrorEinval error = syscall.EINVAL
privNames = make(map[string]uint64)
privNameMutex sync.Mutex
)
// PrivilegeError represents an error enabling privileges.
type PrivilegeError struct {
privileges []uint64
}
// Error returns the string message for the error.
func (e *PrivilegeError) Error() string {
s := "Could not enable privilege "
if len(e.privileges) > 1 {
s = "Could not enable privileges "
}
for i, p := range e.privileges {
if i != 0 {
s += ", "
}
s += `"`
s += getPrivilegeName(p)
s += `"`
}
return s
}
func mapPrivileges(names []string) ([]uint64, error) {
privileges := make([]uint64, 0, len(names))
privNameMutex.Lock()
defer privNameMutex.Unlock()
for _, name := range names {
p, ok := privNames[name]
if !ok {
err := lookupPrivilegeValue("", name, &p)
if err != nil {
return nil, err
}
privNames[name] = p
}
privileges = append(privileges, p)
}
return privileges, nil
}
// enableProcessPrivileges enables privileges globally for the process.
func enableProcessPrivileges(names []string) error {
return enableDisableProcessPrivilege(names, SE_PRIVILEGE_ENABLED)
}
func enableDisableProcessPrivilege(names []string, action uint32) error {
privileges, err := mapPrivileges(names)
if err != nil {
return err
}
p := windows.CurrentProcess()
var token windows.Token
err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
if err != nil {
return err
}
defer func() {
_ = token.Close()
}()
return adjustPrivileges(token, privileges, action)
}
func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
var b bytes.Buffer
_ = binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
for _, p := range privileges {
_ = binary.Write(&b, binary.LittleEndian, p)
_ = binary.Write(&b, binary.LittleEndian, action)
}
prevState := make([]byte, b.Len())
reqSize := uint32(0)
success, err := adjustTokenPrivileges(token, false, &b.Bytes()[0], uint32(len(prevState)), &prevState[0], &reqSize)
if !success {
return err
}
if err == ERROR_NOT_ALL_ASSIGNED { //nolint:errorlint // err is Errno
debug.Log("Not all requested privileges were fully set: %v. AdjustTokenPrivileges returned warning: %v", privileges, err)
}
return nil
}
func getPrivilegeName(luid uint64) string {
var nameBuffer [256]uint16
bufSize := uint32(len(nameBuffer))
err := lookupPrivilegeName("", &luid, &nameBuffer[0], &bufSize)
if err != nil {
return fmt.Sprintf("<unknown privilege %d>", luid)
}
var displayNameBuffer [256]uint16
displayBufSize := uint32(len(displayNameBuffer))
var langID uint32
err = lookupPrivilegeDisplayName("", &nameBuffer[0], &displayNameBuffer[0], &displayBufSize, &langID)
if err != nil {
return fmt.Sprintf("<unknown privilege %s>", string(utf16.Decode(nameBuffer[:bufSize])))
}
return string(utf16.Decode(displayNameBuffer[:displayBufSize]))
}
// The functions below are copied over from https://github.com/microsoft/go-winio/blob/main/zsyscall_windows.go
// This Windows API always returns an error value, whether the call fully succeeds, partially succeeds (warning), or fails.
//
// Full success - When we call this with admin permissions, it returns DNS_ERROR_RCODE_NO_ERROR (0).
// This gets translated to errErrorEinval and ultimately in adjustTokenPrivileges, it gets ignored.
//
// Partial success - If we call this api without admin privileges, privileges related to SACLs do not get set and
// though the api returns success, it returns an error - golang.org/x/sys/windows.ERROR_NOT_ALL_ASSIGNED (1300)
func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
var _p0 uint32
if releaseAll {
_p0 = 1
}
r0, _, e1 := syscall.SyscallN(procAdjustTokenPrivileges.Addr(), uintptr(token), uintptr(_p0), uintptr(unsafe.Pointer(input)), uintptr(outputSize), uintptr(unsafe.Pointer(output)), uintptr(unsafe.Pointer(requiredSize)))
success = r0 != 0
if true {
err = errnoErr(e1)
}
return
}
func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageID *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeDisplayName(_p0, name, buffer, size, languageID)
}
func _lookupPrivilegeDisplayName(systemName *uint16, name *uint16, buffer *uint16, size *uint32, languageID *uint32) (err error) {
r1, _, e1 := syscall.SyscallN(procLookupPrivilegeDisplayNameW.Addr(), uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(languageID)))
if r1 == 0 {
err = errnoErr(e1)
}
return
}
func lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeName(_p0, luid, buffer, size)
}
func _lookupPrivilegeName(systemName *uint16, luid *uint64, buffer *uint16, size *uint32) (err error) {
r1, _, e1 := syscall.SyscallN(procLookupPrivilegeNameW.Addr(), uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(luid)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)))
if r1 == 0 {
err = errnoErr(e1)
}
return
}
func lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
var _p1 *uint16
_p1, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _lookupPrivilegeValue(_p0, _p1, luid)
}
func _lookupPrivilegeValue(systemName *uint16, name *uint16, luid *uint64) (err error) {
r1, _, e1 := syscall.SyscallN(procLookupPrivilegeValueW.Addr(), uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(luid)))
if r1 == 0 {
err = errnoErr(e1)
}
return
}
// The code below was copied from https://github.com/microsoft/go-winio/blob/main/tools/mkwinsyscall/mkwinsyscall.go
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return errErrorEinval
case errnoErrorIOPending:
return errErrorIOPending
}
return e
}

View File

@ -0,0 +1,60 @@
//go:build windows
// +build windows
package fs
import (
"encoding/base64"
"os"
"path/filepath"
"testing"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/test"
)
func TestSetGetFileSecurityDescriptors(t *testing.T) {
tempDir := t.TempDir()
testfilePath := filepath.Join(tempDir, "testfile.txt")
// create temp file
testfile, err := os.Create(testfilePath)
if err != nil {
t.Fatalf("failed to create temporary file: %s", err)
}
defer func() {
err := testfile.Close()
if err != nil {
t.Logf("Error closing file %s: %v\n", testfilePath, err)
}
}()
testSecurityDescriptors(t, TestFileSDs, testfilePath)
}
func TestSetGetFolderSecurityDescriptors(t *testing.T) {
tempDir := t.TempDir()
testfolderPath := filepath.Join(tempDir, "testfolder")
// create temp folder
err := os.Mkdir(testfolderPath, os.ModeDir)
if err != nil {
t.Fatalf("failed to create temporary file: %s", err)
}
testSecurityDescriptors(t, TestDirSDs, testfolderPath)
}
func testSecurityDescriptors(t *testing.T, testSDs []string, testPath string) {
for _, testSD := range testSDs {
sdInputBytes, err := base64.StdEncoding.DecodeString(testSD)
test.OK(t, errors.Wrapf(err, "Error decoding SD: %s", testPath))
err = SetSecurityDescriptor(testPath, &sdInputBytes)
test.OK(t, errors.Wrapf(err, "Error setting file security descriptor for: %s", testPath))
var sdOutputBytes *[]byte
sdOutputBytes, err = GetSecurityDescriptor(testPath)
test.OK(t, errors.Wrapf(err, "Error getting file security descriptor for: %s", testPath))
CompareSecurityDescriptors(t, testPath, sdInputBytes, *sdOutputBytes)
}
}

View File

@ -0,0 +1,126 @@
//go:build windows
// +build windows
package fs
import (
"os/user"
"testing"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/test"
"golang.org/x/sys/windows"
)
var (
TestFileSDs = []string{"AQAUvBQAAAAwAAAAAAAAAEwAAAABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvqAwAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIAfAAEAAAAAAAkAKkAEgABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvtAwAAABAUAP8BHwABAQAAAAAABRIAAAAAEBgA/wEfAAECAAAAAAAFIAAAACACAAAAECQA/wEfAAEFAAAAAAAFFQAAAIifWK5WoILqzL0mq+oDAAA=",
"AQAUvBQAAAAwAAAAAAAAAEwAAAABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvqAwAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIAyAAHAAAAAAAUAKkAEgABAQAAAAAABQcAAAAAABQAiQASAAEBAAAAAAAFBwAAAAAAJACpABIAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSar7QMAAAAAJAC/ARMAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSar6gMAAAAAFAD/AR8AAQEAAAAAAAUSAAAAAAAYAP8BHwABAgAAAAAABSAAAAAgAgAAAAAkAP8BHwABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvqAwAA",
"AQAUvBQAAAAwAAAA7AAAAEwAAAABBQAAAAAABRUAAAAvr7t03PyHGk2FokNHCAAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIAoAAFAAAAAAAkAP8BHwABBQAAAAAABRUAAAAvr7t03PyHGk2FokNHCAAAAAAkAKkAEgABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvtAwAAABAUAP8BHwABAQAAAAAABRIAAAAAEBgA/wEfAAECAAAAAAAFIAAAACACAAAAECQA/wEfAAEFAAAAAAAFFQAAAIifWK5WoILqzL0mq+oDAAACAHQAAwAAAAKAJAC/AQIAAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDtgQAAALAJAC/AQMAAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDPgkAAAJAJAD/AQ8AAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDtQQAAA==",
}
TestDirSDs = []string{"AQAUvBQAAAAwAAAAAAAAAEwAAAABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvqAwAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIAfAAEAAAAAAAkAKkAEgABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvtAwAAABMUAP8BHwABAQAAAAAABRIAAAAAExgA/wEfAAECAAAAAAAFIAAAACACAAAAEyQA/wEfAAEFAAAAAAAFFQAAAIifWK5WoILqzL0mq+oDAAA=",
"AQAUvBQAAAAwAAAAAAAAAEwAAAABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvqAwAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIA3AAIAAAAAAIUAKkAEgABAQAAAAAABQcAAAAAAxQAiQASAAEBAAAAAAAFBwAAAAAAJACpABIAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSar7QMAAAAAJAC/ARMAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSar6gMAAAALFAC/ARMAAQEAAAAAAAMAAAAAABMUAP8BHwABAQAAAAAABRIAAAAAExgA/wEfAAECAAAAAAAFIAAAACACAAAAEyQA/wEfAAEFAAAAAAAFFQAAAIifWK5WoILqzL0mq+oDAAA=",
"AQAUvBQAAAAwAAAA7AAAAEwAAAABBQAAAAAABRUAAAAvr7t03PyHGk2FokNHCAAAAQUAAAAAAAUVAAAAiJ9YrlaggurMvSarAQIAAAIAoAAFAAAAAAAkAP8BHwABBQAAAAAABRUAAAAvr7t03PyHGk2FokNHCAAAAAAkAKkAEgABBQAAAAAABRUAAACIn1iuVqCC6sy9JqvtAwAAABMUAP8BHwABAQAAAAAABRIAAAAAExgA/wEfAAECAAAAAAAFIAAAACACAAAAEyQA/wEfAAEFAAAAAAAFFQAAAIifWK5WoILqzL0mq+oDAAACAHQAAwAAAAKAJAC/AQIAAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDtgQAAALAJAC/AQMAAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDPgkAAAJAJAD/AQ8AAQUAAAAAAAUVAAAAL6+7dNz8hxpNhaJDtQQAAA==",
}
)
// IsAdmin checks if current user is an administrator.
func IsAdmin() (isAdmin bool, err error) {
var sid *windows.SID
err = windows.AllocateAndInitializeSid(&windows.SECURITY_NT_AUTHORITY, 2, windows.SECURITY_BUILTIN_DOMAIN_RID, windows.DOMAIN_ALIAS_RID_ADMINS,
0, 0, 0, 0, 0, 0, &sid)
if err != nil {
return false, errors.Errorf("sid error: %s", err)
}
// Token(0) makes CheckTokenMembership fall back to the caller's own token.
token := windows.Token(0)
member, err := token.IsMember(sid)
if err != nil {
return false, errors.Errorf("token membership error: %s", err)
}
return member, nil
}
// CompareSecurityDescriptors runs tests for comparing 2 security descriptors in []byte format.
func CompareSecurityDescriptors(t *testing.T, testPath string, sdInputBytes, sdOutputBytes []byte) {
sdInput, err := SecurityDescriptorBytesToStruct(sdInputBytes)
test.OK(t, errors.Wrapf(err, "Error converting SD to struct for: %s", testPath))
sdOutput, err := SecurityDescriptorBytesToStruct(sdOutputBytes)
test.OK(t, errors.Wrapf(err, "Error converting SD to struct for: %s", testPath))
isAdmin, err := IsAdmin()
test.OK(t, errors.Wrapf(err, "Error checking if user is admin: %s", testPath))
var ownerExpected *windows.SID
var defaultedOwnerExpected bool
var groupExpected *windows.SID
var defaultedGroupExpected bool
var daclExpected *windows.ACL
var defaultedDaclExpected bool
var saclExpected *windows.ACL
var defaultedSaclExpected bool
// The DACL is set correctly whether or not the application is running as admin.
daclExpected, defaultedDaclExpected, err = sdInput.DACL()
test.OK(t, errors.Wrapf(err, "Error getting input dacl for: %s", testPath))
if isAdmin {
// If the application is running as admin, all SD values, including owner, group, DACL and SACL, are set correctly during restore.
// Hence we will use the input values for comparison with the output values.
ownerExpected, defaultedOwnerExpected, err = sdInput.Owner()
test.OK(t, errors.Wrapf(err, "Error getting input owner for: %s", testPath))
groupExpected, defaultedGroupExpected, err = sdInput.Group()
test.OK(t, errors.Wrapf(err, "Error getting input group for: %s", testPath))
saclExpected, defaultedSaclExpected, err = sdInput.SACL()
test.OK(t, errors.Wrapf(err, "Error getting input sacl for: %s", testPath))
} else {
// If the application is not running as admin, the owner and group are set to the current user's SID/GID during restore and the SACL is empty.
// Get the current user
user, err := user.Current()
test.OK(t, errors.Wrapf(err, "Could not get current user for: %s", testPath))
// Get current user's SID
currentUserSID, err := windows.StringToSid(user.Uid)
test.OK(t, errors.Wrapf(err, "Error getting output group for: %s", testPath))
// Get current user's Group SID
currentGroupSID, err := windows.StringToSid(user.Gid)
test.OK(t, errors.Wrapf(err, "Error getting output group for: %s", testPath))
// Expect the owner and group to be the current user's SID and group SID after restore.
ownerExpected = currentUserSID
defaultedOwnerExpected = false
groupExpected = currentGroupSID
defaultedGroupExpected = false
// If the application is not running as admin, the SACL is returned empty.
saclExpected = nil
defaultedSaclExpected = false
}
// Now do all the comparisons
// Get owner SID from output file
ownerOut, defaultedOwnerOut, err := sdOutput.Owner()
test.OK(t, errors.Wrapf(err, "Error getting output owner for: %s", testPath))
// Compare owner SIDs. We must use the Equals method for comparison as a syscall is made for comparing SIDs.
test.Assert(t, ownerExpected.Equals(ownerOut), "Owner from SDs read from test path doesn't match: %s, want: %s, got: %s", testPath, ownerExpected.String(), ownerOut.String())
test.Equals(t, defaultedOwnerExpected, defaultedOwnerOut, "Defaulted for owner from SDs read from test path doesn't match: %s", testPath)
// Get group SID from output file
groupOut, defaultedGroupOut, err := sdOutput.Group()
test.OK(t, errors.Wrapf(err, "Error getting output group for: %s", testPath))
// Compare group SIDs. We must use the Equals method for comparison as a syscall is made for comparing SIDs.
test.Assert(t, groupExpected.Equals(groupOut), "Group from SDs read from test path doesn't match: %s, want: %s, got: %s", testPath, groupExpected.String(), groupOut.String())
test.Equals(t, defaultedGroupExpected, defaultedGroupOut, "Defaulted for group from SDs read from test path doesn't match: %s", testPath)
// Get dacl from output file
daclOut, defaultedDaclOut, err := sdOutput.DACL()
test.OK(t, errors.Wrapf(err, "Error getting output dacl for: %s", testPath))
// Compare dacls
test.Equals(t, daclExpected, daclOut, "DACL from SDs read from test path doesn't match: %s", testPath)
test.Equals(t, defaultedDaclExpected, defaultedDaclOut, "Defaulted for DACL from SDs read from test path doesn't match: %s", testPath)
// Get sacl from output file
saclOut, defaultedSaclOut, err := sdOutput.SACL()
test.OK(t, errors.Wrapf(err, "Error getting output sacl for: %s", testPath))
// Compare sacls
test.Equals(t, saclExpected, saclOut, "SACL from SDs read from test path doesn't match: %s", testPath)
test.Equals(t, defaultedSaclExpected, defaultedSaclOut, "Defaulted for SACL from SDs read from test path doesn't match: %s", testPath)
}

View File

@ -4,6 +4,8 @@
package fs
import (
"time"
"github.com/restic/restic/internal/errors"
)
@ -31,10 +33,16 @@ func HasSufficientPrivilegesForVSS() error {
return errors.New("VSS snapshots are only supported on windows")
}
// GetVolumeNameForVolumeMountPoint adds a trailing backslash to the input parameter
// and calls the equivalent windows api.
func GetVolumeNameForVolumeMountPoint(mountPoint string) (string, error) {
return mountPoint, nil
}
// NewVssSnapshot creates a new vss snapshot. If creating the snapshot doesn't
// finish within the timeout, an error is returned.
func NewVssSnapshot(
_ string, _ uint, _ ErrorHandler) (VssSnapshot, error) {
func NewVssSnapshot(_ string,
_ string, _ time.Duration, _ VolumeFilter, _ ErrorHandler) (VssSnapshot, error) {
return VssSnapshot{}, errors.New("VSS snapshots are only supported on windows")
}
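
VolumeFilter is referenced in the new signature but its definition lies outside this hunk; presumably it is a simple predicate over volume paths, roughly:

// Assumed shape of VolumeFilter (not shown in this excerpt): reports whether
// mount points below the given volume should also be snapshotted.
type VolumeFilter func(volume string) bool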

View File

@ -5,10 +5,12 @@ package fs
import (
"fmt"
"math"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
"unsafe"
ole "github.com/go-ole/go-ole"
@ -20,8 +22,10 @@ import (
type HRESULT uint
// HRESULT constant values necessary for using VSS api.
//nolint:golint
const (
S_OK HRESULT = 0x00000000
S_FALSE HRESULT = 0x00000001
E_ACCESSDENIED HRESULT = 0x80070005
E_OUTOFMEMORY HRESULT = 0x8007000E
E_INVALIDARG HRESULT = 0x80070057
@ -255,6 +259,7 @@ type IVssBackupComponents struct {
}
// IVssBackupComponentsVTable is the vtable for IVssBackupComponents.
// nolint:structcheck
type IVssBackupComponentsVTable struct {
ole.IUnknownVtbl
getWriterComponentsCount uintptr
@ -364,7 +369,7 @@ func (vss *IVssBackupComponents) convertToVSSAsync(
}
// IsVolumeSupported calls the equivalent VSS api.
func (vss *IVssBackupComponents) IsVolumeSupported(volumeName string) (bool, error) {
func (vss *IVssBackupComponents) IsVolumeSupported(providerID *ole.GUID, volumeName string) (bool, error) {
volumeNamePointer, err := syscall.UTF16PtrFromString(volumeName)
if err != nil {
panic(err)
@ -374,7 +379,7 @@ func (vss *IVssBackupComponents) IsVolumeSupported(volumeName string) (bool, err
var result uintptr
if runtime.GOARCH == "386" {
id := (*[4]uintptr)(unsafe.Pointer(ole.IID_NULL))
id := (*[4]uintptr)(unsafe.Pointer(providerID))
result, _, _ = syscall.Syscall9(vss.getVTable().isVolumeSupported, 7,
uintptr(unsafe.Pointer(vss)), id[0], id[1], id[2], id[3],
@ -382,7 +387,7 @@ func (vss *IVssBackupComponents) IsVolumeSupported(volumeName string) (bool, err
0)
} else {
result, _, _ = syscall.Syscall6(vss.getVTable().isVolumeSupported, 4,
uintptr(unsafe.Pointer(vss)), uintptr(unsafe.Pointer(ole.IID_NULL)),
uintptr(unsafe.Pointer(vss)), uintptr(unsafe.Pointer(providerID)),
uintptr(unsafe.Pointer(volumeNamePointer)), uintptr(unsafe.Pointer(&isSupportedRaw)), 0,
0)
}
@ -408,24 +413,24 @@ func (vss *IVssBackupComponents) StartSnapshotSet() (ole.GUID, error) {
}
// AddToSnapshotSet calls the equivalent VSS api.
func (vss *IVssBackupComponents) AddToSnapshotSet(volumeName string, idSnapshot *ole.GUID) error {
func (vss *IVssBackupComponents) AddToSnapshotSet(volumeName string, providerID *ole.GUID, idSnapshot *ole.GUID) error {
volumeNamePointer, err := syscall.UTF16PtrFromString(volumeName)
if err != nil {
panic(err)
}
var result uintptr = 0
var result uintptr
if runtime.GOARCH == "386" {
id := (*[4]uintptr)(unsafe.Pointer(ole.IID_NULL))
id := (*[4]uintptr)(unsafe.Pointer(providerID))
result, _, _ = syscall.Syscall9(vss.getVTable().addToSnapshotSet, 7,
uintptr(unsafe.Pointer(vss)), uintptr(unsafe.Pointer(volumeNamePointer)), id[0], id[1],
id[2], id[3], uintptr(unsafe.Pointer(idSnapshot)), 0, 0)
uintptr(unsafe.Pointer(vss)), uintptr(unsafe.Pointer(volumeNamePointer)),
id[0], id[1], id[2], id[3], uintptr(unsafe.Pointer(idSnapshot)), 0, 0)
} else {
result, _, _ = syscall.Syscall6(vss.getVTable().addToSnapshotSet, 4,
uintptr(unsafe.Pointer(vss)), uintptr(unsafe.Pointer(volumeNamePointer)),
uintptr(unsafe.Pointer(ole.IID_NULL)), uintptr(unsafe.Pointer(idSnapshot)), 0, 0)
uintptr(unsafe.Pointer(providerID)), uintptr(unsafe.Pointer(idSnapshot)), 0, 0)
}
return newVssErrorIfResultNotOK("AddToSnapshotSet() failed", HRESULT(result))
@ -478,9 +483,9 @@ func (vss *IVssBackupComponents) DoSnapshotSet() (*IVSSAsync, error) {
// DeleteSnapshots calls the equivalent VSS api.
func (vss *IVssBackupComponents) DeleteSnapshots(snapshotID ole.GUID) (int32, ole.GUID, error) {
var deletedSnapshots int32 = 0
var deletedSnapshots int32
var nondeletedSnapshotID ole.GUID
var result uintptr = 0
var result uintptr
if runtime.GOARCH == "386" {
id := (*[4]uintptr)(unsafe.Pointer(&snapshotID))
@ -504,7 +509,7 @@ func (vss *IVssBackupComponents) DeleteSnapshots(snapshotID ole.GUID) (int32, ol
// GetSnapshotProperties calls the equivalent VSS api.
func (vss *IVssBackupComponents) GetSnapshotProperties(snapshotID ole.GUID,
properties *VssSnapshotProperties) error {
var result uintptr = 0
var result uintptr
if runtime.GOARCH == "386" {
id := (*[4]uintptr)(unsafe.Pointer(&snapshotID))
@ -527,8 +532,8 @@ func vssFreeSnapshotProperties(properties *VssSnapshotProperties) error {
if err != nil {
return err
}
proc.Call(uintptr(unsafe.Pointer(properties)))
// this function always succeeds and returns no value
_, _, _ = proc.Call(uintptr(unsafe.Pointer(properties)))
return nil
}
@ -543,6 +548,7 @@ func (vss *IVssBackupComponents) BackupComplete() (*IVSSAsync, error) {
}
// VssSnapshotProperties defines the properties of a VSS snapshot as part of the VSS api.
// nolint:structcheck
type VssSnapshotProperties struct {
snapshotID ole.GUID
snapshotSetID ole.GUID
@ -559,6 +565,24 @@ type VssSnapshotProperties struct {
status uint
}
// VssProviderProperties defines the properties of a VSS provider as part of the VSS api.
// nolint:structcheck
type VssProviderProperties struct {
providerID ole.GUID
providerName *uint16
providerType uint32
providerVersion *uint16
providerVersionID ole.GUID
classID ole.GUID
}
func vssFreeProviderProperties(p *VssProviderProperties) {
ole.CoTaskMemFree(uintptr(unsafe.Pointer(p.providerName)))
p.providerName = nil
ole.CoTaskMemFree(uintptr(unsafe.Pointer(p.providerVersion)))
p.providerVersion = nil
}
// GetSnapshotDeviceObject returns the root path to access the snapshot files
// and folders.
func (p *VssSnapshotProperties) GetSnapshotDeviceObject() string {
@ -617,8 +641,13 @@ func (vssAsync *IVSSAsync) QueryStatus() (HRESULT, uint32) {
// WaitUntilAsyncFinished waits until either the async call is finished or
// the given timeout is reached.
func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(millis uint32) error {
hresult := vssAsync.Wait(millis)
func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(timeout time.Duration) error {
const maxTimeout = math.MaxInt32 * time.Millisecond
if timeout > maxTimeout {
timeout = maxTimeout
}
hresult := vssAsync.Wait(uint32(timeout.Milliseconds()))
err := newVssErrorIfResultNotOK("Wait() failed", hresult)
if err != nil {
vssAsync.Cancel()
@ -651,6 +680,75 @@ func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(millis uint32) error {
return nil
}
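
The cap above only guards the conversion to the uint32 millisecond value expected by the underlying Wait call; math.MaxInt32 milliseconds is about 24.8 days, far beyond any realistic snapshot timeout. A quick illustration of the arithmetic:

const maxTimeout = math.MaxInt32 * time.Millisecond
fmt.Println(maxTimeout)                        // 596h31m23.647s, roughly 24.8 days
fmt.Println(uint32(maxTimeout.Milliseconds())) // 2147483647, still fits into uint32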
// UIID_IVSS_ADMIN defines the GUID of IVSSAdmin.
var (
UIID_IVSS_ADMIN = ole.NewGUID("{77ED5996-2F63-11d3-8A39-00C04F72D8E3}")
CLSID_VSS_COORDINATOR = ole.NewGUID("{E579AB5F-1CC4-44b4-BED9-DE0991FF0623}")
)
// IVSSAdmin VSS api interface.
type IVSSAdmin struct {
ole.IUnknown
}
// IVSSAdminVTable is the vtable for IVSSAdmin.
// nolint:structcheck
type IVSSAdminVTable struct {
ole.IUnknownVtbl
registerProvider uintptr
unregisterProvider uintptr
queryProviders uintptr
abortAllSnapshotsInProgress uintptr
}
// getVTable returns the vtable for IVSSAdmin.
func (vssAdmin *IVSSAdmin) getVTable() *IVSSAdminVTable {
return (*IVSSAdminVTable)(unsafe.Pointer(vssAdmin.RawVTable))
}
// QueryProviders calls the equivalent VSS api.
func (vssAdmin *IVSSAdmin) QueryProviders() (*IVssEnumObject, error) {
var enum *IVssEnumObject
result, _, _ := syscall.Syscall(vssAdmin.getVTable().queryProviders, 2,
uintptr(unsafe.Pointer(vssAdmin)), uintptr(unsafe.Pointer(&enum)), 0)
return enum, newVssErrorIfResultNotOK("QueryProviders() failed", HRESULT(result))
}
// IVssEnumObject VSS api interface.
type IVssEnumObject struct {
ole.IUnknown
}
// IVssEnumObjectVTable is the vtable for IVssEnumObject.
// nolint:structcheck
type IVssEnumObjectVTable struct {
ole.IUnknownVtbl
next uintptr
skip uintptr
reset uintptr
clone uintptr
}
// getVTable returns the vtable for IVssEnumObject.
func (vssEnum *IVssEnumObject) getVTable() *IVssEnumObjectVTable {
return (*IVssEnumObjectVTable)(unsafe.Pointer(vssEnum.RawVTable))
}
// Next calls the equivalent VSS api.
func (vssEnum *IVssEnumObject) Next(count uint, props unsafe.Pointer) (uint, error) {
var fetched uint32
result, _, _ := syscall.Syscall6(vssEnum.getVTable().next, 4,
uintptr(unsafe.Pointer(vssEnum)), uintptr(count), uintptr(props),
uintptr(unsafe.Pointer(&fetched)), 0, 0)
if HRESULT(result) == S_FALSE {
return uint(fetched), nil
}
return uint(fetched), newVssErrorIfResultNotOK("Next() failed", HRESULT(result))
}
// MountPoint wraps all information of a snapshot of a mountpoint on a volume.
type MountPoint struct {
isSnapshotted bool
@ -677,7 +775,7 @@ type VssSnapshot struct {
snapshotProperties VssSnapshotProperties
snapshotDeviceObject string
mountPointInfo map[string]MountPoint
timeoutInMillis uint32
timeout time.Duration
}
// GetSnapshotDeviceObject returns the root path to access the snapshot files
@ -694,7 +792,12 @@ func initializeVssCOMInterface() (*ole.IUnknown, error) {
}
// ensure COM is initialized before use
ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED)
if err = ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED); err != nil {
// CoInitializeEx returns S_FALSE if COM is already initialized
if oleErr, ok := err.(*ole.OleError); !ok || HRESULT(oleErr.Code()) != S_FALSE {
return nil, err
}
}
var oleIUnknown *ole.IUnknown
result, _, _ := vssInstance.Call(uintptr(unsafe.Pointer(&oleIUnknown)))
@ -727,12 +830,34 @@ func HasSufficientPrivilegesForVSS() error {
return err
}
// GetVolumeNameForVolumeMountPoint adds a trailing backslash to the input parameter
// and calls the equivalent windows api.
func GetVolumeNameForVolumeMountPoint(mountPoint string) (string, error) {
if mountPoint != "" && mountPoint[len(mountPoint)-1] != filepath.Separator {
mountPoint += string(filepath.Separator)
}
mountPointPointer, err := syscall.UTF16PtrFromString(mountPoint)
if err != nil {
return mountPoint, err
}
// A reasonable size for the buffer to accommodate the largest possible
// volume GUID path is 50 characters.
volumeNameBuffer := make([]uint16, 50)
if err := windows.GetVolumeNameForVolumeMountPoint(
mountPointPointer, &volumeNameBuffer[0], 50); err != nil {
return mountPoint, err
}
return syscall.UTF16ToString(volumeNameBuffer), nil
}
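
A rough usage sketch of the helper above; the drive letter and the returned GUID path are illustrative only:

// Both `C:` and `C:\` work, since the trailing backslash required by
// GetVolumeNameForVolumeMountPointW is appended automatically.
volume, err := GetVolumeNameForVolumeMountPoint(`C:`)
if err != nil {
    return err
}
// volume is now a volume GUID path such as
// `\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\` (illustrative).
_ = volume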
// NewVssSnapshot creates a new vss snapshot. If creating the snapshot doesn't
// finish within the timeout, an error is returned.
func NewVssSnapshot(
volume string, timeoutInSeconds uint, msgError ErrorHandler) (VssSnapshot, error) {
func NewVssSnapshot(provider string,
volume string, timeout time.Duration, filter VolumeFilter, msgError ErrorHandler) (VssSnapshot, error) {
is64Bit, err := isRunningOn64BitWindows()
if err != nil {
return VssSnapshot{}, newVssTextError(fmt.Sprintf(
"Failed to detect windows architecture: %s", err.Error()))
@ -744,7 +869,7 @@ func NewVssSnapshot(
runtime.GOARCH))
}
timeoutInMillis := uint32(timeoutInSeconds * 1000)
deadline := time.Now().Add(timeout)
oleIUnknown, err := initializeVssCOMInterface()
if oleIUnknown != nil {
@ -778,6 +903,12 @@ func NewVssSnapshot(
iVssBackupComponents := (*IVssBackupComponents)(unsafe.Pointer(comInterface))
providerID, err := getProviderID(provider)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
if err := iVssBackupComponents.InitializeForBackup(); err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
@ -796,13 +927,13 @@ func NewVssSnapshot(
}
err = callAsyncFunctionAndWait(iVssBackupComponents.GatherWriterMetadata,
"GatherWriterMetadata", timeoutInMillis)
"GatherWriterMetadata", deadline)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
if isSupported, err := iVssBackupComponents.IsVolumeSupported(volume); err != nil {
if isSupported, err := iVssBackupComponents.IsVolumeSupported(providerID, volume); err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
} else if !isSupported {
@ -817,44 +948,53 @@ func NewVssSnapshot(
return VssSnapshot{}, err
}
if err := iVssBackupComponents.AddToSnapshotSet(volume, &snapshotSetID); err != nil {
if err := iVssBackupComponents.AddToSnapshotSet(volume, providerID, &snapshotSetID); err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
mountPoints, err := enumerateMountedFolders(volume)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, newVssTextError(fmt.Sprintf(
"failed to enumerate mount points for volume %s: %s", volume, err))
}
mountPointInfo := make(map[string]MountPoint)
for _, mountPoint := range mountPoints {
// ensure every mountpoint is available even without a valid
// snapshot because we need to consider this when backing up files
mountPointInfo[mountPoint] = MountPoint{isSnapshotted: false}
if isSupported, err := iVssBackupComponents.IsVolumeSupported(mountPoint); err != nil {
continue
} else if !isSupported {
continue
}
var mountPointSnapshotSetID ole.GUID
err := iVssBackupComponents.AddToSnapshotSet(mountPoint, &mountPointSnapshotSetID)
// if filter==nil just don't process mount points for this volume at all
if filter != nil {
mountPoints, err := enumerateMountedFolders(volume)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
return VssSnapshot{}, newVssTextError(fmt.Sprintf(
"failed to enumerate mount points for volume %s: %s", volume, err))
}
mountPointInfo[mountPoint] = MountPoint{isSnapshotted: true,
snapshotSetID: mountPointSnapshotSetID}
for _, mountPoint := range mountPoints {
// ensure every mountpoint is available even without a valid
// snapshot because we need to consider this when backing up files
mountPointInfo[mountPoint] = MountPoint{isSnapshotted: false}
if !filter(mountPoint) {
continue
} else if isSupported, err := iVssBackupComponents.IsVolumeSupported(providerID, mountPoint); err != nil {
continue
} else if !isSupported {
continue
}
var mountPointSnapshotSetID ole.GUID
err := iVssBackupComponents.AddToSnapshotSet(mountPoint, providerID, &mountPointSnapshotSetID)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
mountPointInfo[mountPoint] = MountPoint{
isSnapshotted: true,
snapshotSetID: mountPointSnapshotSetID,
}
}
}
err = callAsyncFunctionAndWait(iVssBackupComponents.PrepareForBackup, "PrepareForBackup",
timeoutInMillis)
deadline)
if err != nil {
// After calling PrepareForBackup one needs to call AbortBackup() before releasing the VSS
// instance for proper cleanup.
@ -865,9 +1005,9 @@ func NewVssSnapshot(
}
err = callAsyncFunctionAndWait(iVssBackupComponents.DoSnapshotSet, "DoSnapshotSet",
timeoutInMillis)
deadline)
if err != nil {
iVssBackupComponents.AbortBackup()
_ = iVssBackupComponents.AbortBackup()
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
@ -875,13 +1015,12 @@ func NewVssSnapshot(
var snapshotProperties VssSnapshotProperties
err = iVssBackupComponents.GetSnapshotProperties(snapshotSetID, &snapshotProperties)
if err != nil {
iVssBackupComponents.AbortBackup()
_ = iVssBackupComponents.AbortBackup()
iVssBackupComponents.Release()
return VssSnapshot{}, err
}
for mountPoint, info := range mountPointInfo {
if !info.isSnapshotted {
continue
}
@ -900,8 +1039,10 @@ func NewVssSnapshot(
mountPointInfo[mountPoint] = info
}
return VssSnapshot{iVssBackupComponents, snapshotSetID, snapshotProperties,
snapshotProperties.GetSnapshotDeviceObject(), mountPointInfo, timeoutInMillis}, nil
return VssSnapshot{
iVssBackupComponents, snapshotSetID, snapshotProperties,
snapshotProperties.GetSnapshotDeviceObject(), mountPointInfo, time.Until(deadline),
}, nil
}
// Delete deletes the created snapshot.
@ -922,15 +1063,17 @@ func (p *VssSnapshot) Delete() error {
if p.iVssBackupComponents != nil {
defer p.iVssBackupComponents.Release()
deadline := time.Now().Add(p.timeout)
err = callAsyncFunctionAndWait(p.iVssBackupComponents.BackupComplete, "BackupComplete",
p.timeoutInMillis)
deadline)
if err != nil {
return err
}
if _, _, e := p.iVssBackupComponents.DeleteSnapshots(p.snapshotID); e != nil {
err = newVssTextError(fmt.Sprintf("Failed to delete snapshot: %s", e.Error()))
p.iVssBackupComponents.AbortBackup()
_ = p.iVssBackupComponents.AbortBackup()
if err != nil {
return err
}
@ -940,12 +1083,61 @@ func (p *VssSnapshot) Delete() error {
return nil
}
func getProviderID(provider string) (*ole.GUID, error) {
providerLower := strings.ToLower(provider)
switch providerLower {
case "":
return ole.IID_NULL, nil
case "ms":
return ole.NewGUID("{b5946137-7b9f-4925-af80-51abd60b20d5}"), nil
}
comInterface, err := ole.CreateInstance(CLSID_VSS_COORDINATOR, UIID_IVSS_ADMIN)
if err != nil {
return nil, err
}
defer comInterface.Release()
vssAdmin := (*IVSSAdmin)(unsafe.Pointer(comInterface))
enum, err := vssAdmin.QueryProviders()
if err != nil {
return nil, err
}
defer enum.Release()
id := ole.NewGUID(provider)
var props struct {
objectType uint32
provider VssProviderProperties
}
for {
count, err := enum.Next(1, unsafe.Pointer(&props))
if err != nil {
return nil, err
}
if count < 1 {
return nil, errors.Errorf(`invalid VSS provider "%s"`, provider)
}
name := ole.UTF16PtrToString(props.provider.providerName)
vssFreeProviderProperties(&props.provider)
if id != nil && *id == props.provider.providerID ||
id == nil && providerLower == strings.ToLower(name) {
return &props.provider.providerID, nil
}
}
}
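
A short usage sketch for getProviderID as it is called from NewVssSnapshot above; the argument value is illustrative:

// "" keeps the default provider (IID_NULL), "ms" selects the Microsoft
// Software Shadow Copy provider; any other value is matched against the
// registered providers by GUID or by name.
providerID, err := getProviderID("ms")
if err != nil {
    return VssSnapshot{}, err
}
_ = providerID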
// asyncCallFunc is the callback type for callAsyncFunctionAndWait.
type asyncCallFunc func() (*IVSSAsync, error)
// callAsyncFunctionAndWait calls an async function and waits for it to either
// finish or time out.
func callAsyncFunctionAndWait(function asyncCallFunc, name string, timeoutInMillis uint32) error {
func callAsyncFunctionAndWait(function asyncCallFunc, name string, deadline time.Time) error {
iVssAsync, err := function()
if err != nil {
return err
@ -955,7 +1147,12 @@ func callAsyncFunctionAndWait(function asyncCallFunc, name string, timeoutInMill
return newVssTextError(fmt.Sprintf("%s() returned nil", name))
}
err = iVssAsync.WaitUntilAsyncFinished(timeoutInMillis)
timeout := time.Until(deadline)
if timeout <= 0 {
return newVssTextError(fmt.Sprintf("%s() deadline exceeded", name))
}
err = iVssAsync.WaitUntilAsyncFinished(timeout)
iVssAsync.Release()
return err
}
@ -1036,6 +1233,7 @@ func enumerateMountedFolders(volume string) ([]string, error) {
return mountedFolders, nil
}
// nolint:errcheck
defer windows.FindVolumeMountPointClose(handle)
volumeMountPoint := syscall.UTF16ToString(volumeMountPointBuffer)

View File

@ -96,20 +96,14 @@ func (f *file) Open(_ context.Context, _ *fuse.OpenRequest, _ *fuse.OpenResponse
}
func (f *openFile) getBlobAt(ctx context.Context, i int) (blob []byte, err error) {
blob, ok := f.root.blobCache.Get(f.node.Content[i])
if ok {
return blob, nil
}
blob, err = f.root.repo.LoadBlob(ctx, restic.DataBlob, f.node.Content[i], nil)
blob, err = f.root.blobCache.GetOrCompute(f.node.Content[i], func() ([]byte, error) {
return f.root.repo.LoadBlob(ctx, restic.DataBlob, f.node.Content[i], nil)
})
if err != nil {
debug.Log("LoadBlob(%v, %v) failed: %v", f.node.Name, f.node.Content[i], err)
return nil, unwrapCtxCanceled(err)
}
f.root.blobCache.Add(f.node.Content[i], blob)
return blob, nil
}
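
The change above folds the separate Get/LoadBlob/Add steps into a single GetOrCompute call. The blobCache implementation is not part of this excerpt; as a hedged sketch, such a helper usually has roughly the following shape (the real version may additionally deduplicate concurrent loads of the same blob):

// exampleBlobCache is a hypothetical cache type used only for illustration.
func (c *exampleBlobCache) GetOrCompute(id restic.ID, compute func() ([]byte, error)) ([]byte, error) {
    // Return the cached blob if present, otherwise compute it once and store the result.
    if blob, ok := c.Get(id); ok {
        return blob, nil
    }
    blob, err := compute()
    if err != nil {
        return nil, err
    }
    c.Add(id, blob)
    return blob, nil
}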

View File

@ -15,7 +15,7 @@ import (
var repoFixture = filepath.Join("..", "repository", "testdata", "test-repo.tar.gz")
func TestRepositoryForAllIndexes(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, repoFixture)
repo, _, cleanup := repository.TestFromFixture(t, repoFixture)
defer cleanup()
expectedIndexIDs := restic.NewIDSet()

View File

@ -204,7 +204,7 @@ func (h *hashedArrayTree) Size() uint {
func (h *hashedArrayTree) grow() {
idx, subIdx := h.index(h.size)
if int(idx) == len(h.blockList) {
// blockList is too small -> double list and block size
// blockList is too short -> double list and block size
h.blockSize *= 2
h.mask = h.mask*2 + 1
h.maskShift++

View File

@ -270,7 +270,7 @@ func (mi *MasterIndex) MergeFinalIndexes() error {
// Save saves all known indexes to index files, leaving out any
// packs whose ID is contained in packBlacklist from finalized indexes.
// It also removes the old index files and those listed in extraObsolete.
func (mi *MasterIndex) Save(ctx context.Context, repo restic.Repository, excludePacks restic.IDSet, extraObsolete restic.IDs, opts restic.MasterIndexSaveOpts) error {
func (mi *MasterIndex) Save(ctx context.Context, repo restic.SaverRemoverUnpacked, excludePacks restic.IDSet, extraObsolete restic.IDs, opts restic.MasterIndexSaveOpts) error {
p := opts.SaveProgress
p.SetMax(uint64(len(mi.Packs(excludePacks))))

View File

@ -342,7 +342,7 @@ var (
)
func createFilledRepo(t testing.TB, snapshots int, version uint) restic.Repository {
repo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
for i := 0; i < snapshots; i++ {
restic.TestCreateSnapshot(t, repo, snapshotTime.Add(time.Duration(i)*time.Second), depth)

View File

@ -11,6 +11,7 @@ import (
"github.com/restic/restic/internal/backend/s3"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
)
@ -24,7 +25,7 @@ type S3Layout struct{}
// Check tests whether the migration can be applied.
func (m *S3Layout) Check(_ context.Context, repo restic.Repository) (bool, string, error) {
be := backend.AsBackend[*s3.Backend](repo.Backend())
be := repository.AsS3Backend(repo.(*repository.Repository))
if be == nil {
debug.Log("backend is not s3")
return false, "backend is not s3", nil
@ -76,7 +77,7 @@ func (m *S3Layout) moveFiles(ctx context.Context, be *s3.Backend, l layout.Layou
// Apply runs the migration.
func (m *S3Layout) Apply(ctx context.Context, repo restic.Repository) error {
be := backend.AsBackend[*s3.Backend](repo.Backend())
be := repository.AsS3Backend(repo.(*repository.Repository))
if be == nil {
debug.Log("backend is not s3")
return errors.New("backend is not s3")

View File

@ -3,11 +3,8 @@ package migrations
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
)
@ -15,26 +12,6 @@ func init() {
register(&UpgradeRepoV2{})
}
type UpgradeRepoV2Error struct {
UploadNewConfigError error
ReuploadOldConfigError error
BackupFilePath string
}
func (err *UpgradeRepoV2Error) Error() string {
if err.ReuploadOldConfigError != nil {
return fmt.Sprintf("error uploading config (%v), re-uploading old config filed failed as well (%v), but there is a backup of the config file in %v", err.UploadNewConfigError, err.ReuploadOldConfigError, err.BackupFilePath)
}
return fmt.Sprintf("error uploading config (%v), re-uploaded old config was successful, there is a backup of the config file in %v", err.UploadNewConfigError, err.BackupFilePath)
}
func (err *UpgradeRepoV2Error) Unwrap() error {
// consider the original upload error as the primary cause
return err.UploadNewConfigError
}
type UpgradeRepoV2 struct{}
func (*UpgradeRepoV2) Name() string {
@ -57,74 +34,7 @@ func (*UpgradeRepoV2) Check(_ context.Context, repo restic.Repository) (bool, st
func (*UpgradeRepoV2) RepoCheck() bool {
return true
}
func (*UpgradeRepoV2) upgrade(ctx context.Context, repo restic.Repository) error {
h := backend.Handle{Type: backend.ConfigFile}
if !repo.Backend().HasAtomicReplace() {
// remove the original file for backends which do not support atomic overwriting
err := repo.Backend().Remove(ctx, h)
if err != nil {
return fmt.Errorf("remove config failed: %w", err)
}
}
// upgrade config
cfg := repo.Config()
cfg.Version = 2
err := restic.SaveConfig(ctx, repo, cfg)
if err != nil {
return fmt.Errorf("save new config file failed: %w", err)
}
return nil
}
func (m *UpgradeRepoV2) Apply(ctx context.Context, repo restic.Repository) error {
tempdir, err := os.MkdirTemp("", "restic-migrate-upgrade-repo-v2-")
if err != nil {
return fmt.Errorf("create temp dir failed: %w", err)
}
h := backend.Handle{Type: restic.ConfigFile}
// read raw config file and save it to a temp dir, just in case
var rawConfigFile []byte
err = repo.Backend().Load(ctx, h, 0, 0, func(rd io.Reader) (err error) {
rawConfigFile, err = io.ReadAll(rd)
return err
})
if err != nil {
return fmt.Errorf("load config file failed: %w", err)
}
backupFileName := filepath.Join(tempdir, "config")
err = os.WriteFile(backupFileName, rawConfigFile, 0600)
if err != nil {
return fmt.Errorf("write config file backup to %v failed: %w", tempdir, err)
}
// run the upgrade
err = m.upgrade(ctx, repo)
if err != nil {
// build an error we can return to the caller
repoError := &UpgradeRepoV2Error{
UploadNewConfigError: err,
BackupFilePath: backupFileName,
}
// try contingency methods, reupload the original file
_ = repo.Backend().Remove(ctx, h)
err = repo.Backend().Save(ctx, h, backend.NewByteReader(rawConfigFile, nil))
if err != nil {
repoError.ReuploadOldConfigError = err
}
return repoError
}
_ = os.Remove(backupFileName)
_ = os.Remove(tempdir)
return nil
return repository.UpgradeRepo(ctx, repo.(*repository.Repository))
}

View File

@ -2,19 +2,13 @@ package migrations
import (
"context"
"os"
"path/filepath"
"sync"
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/test"
)
func TestUpgradeRepoV2(t *testing.T) {
repo := repository.TestRepositoryWithVersion(t, 1)
repo, _ := repository.TestRepositoryWithVersion(t, 1)
if repo.Config().Version != 1 {
t.Fatal("test repo has wrong version")
}
@ -35,73 +29,3 @@ func TestUpgradeRepoV2(t *testing.T) {
t.Fatal(err)
}
}
type failBackend struct {
backend.Backend
mu sync.Mutex
ConfigFileSavesUntilError uint
}
func (be *failBackend) Save(ctx context.Context, h backend.Handle, rd backend.RewindReader) error {
if h.Type != backend.ConfigFile {
return be.Backend.Save(ctx, h, rd)
}
be.mu.Lock()
if be.ConfigFileSavesUntilError == 0 {
be.mu.Unlock()
return errors.New("failure induced for testing")
}
be.ConfigFileSavesUntilError--
be.mu.Unlock()
return be.Backend.Save(ctx, h, rd)
}
func TestUpgradeRepoV2Failure(t *testing.T) {
be := repository.TestBackend(t)
// wrap backend so that it fails upgrading the config after the initial write
be = &failBackend{
ConfigFileSavesUntilError: 1,
Backend: be,
}
repo := repository.TestRepositoryWithBackend(t, be, 1, repository.Options{})
if repo.Config().Version != 1 {
t.Fatal("test repo has wrong version")
}
m := &UpgradeRepoV2{}
ok, _, err := m.Check(context.Background(), repo)
if err != nil {
t.Fatal(err)
}
if !ok {
t.Fatal("migration check returned false")
}
err = m.Apply(context.Background(), repo)
if err == nil {
t.Fatal("expected error returned from Apply(), got nil")
}
upgradeErr := err.(*UpgradeRepoV2Error)
if upgradeErr.UploadNewConfigError == nil {
t.Fatal("expected upload error, got nil")
}
if upgradeErr.ReuploadOldConfigError == nil {
t.Fatal("expected reupload error, got nil")
}
if upgradeErr.BackupFilePath == "" {
t.Fatal("no backup file path found")
}
test.OK(t, os.Remove(upgradeErr.BackupFilePath))
test.OK(t, os.Remove(filepath.Dir(upgradeErr.BackupFilePath)))
}

View File

@ -239,7 +239,7 @@ func readRecords(rd io.ReaderAt, size int64, bufsize int) ([]byte, int, error) {
case hlen == 0:
err = InvalidFileError{Message: "header length is zero"}
case hlen < crypto.Extension:
err = InvalidFileError{Message: "header length is too small"}
err = InvalidFileError{Message: "header length is too short"}
case int64(hlen) > size-int64(headerLengthSize):
err = InvalidFileError{Message: "header is larger than file"}
case int64(hlen) > MaxHeaderSize-int64(headerLengthSize):
@ -263,7 +263,7 @@ func readRecords(rd io.ReaderAt, size int64, bufsize int) ([]byte, int, error) {
func readHeader(rd io.ReaderAt, size int64) ([]byte, error) {
debug.Log("size: %v", size)
if size < int64(minFileSize) {
err := InvalidFileError{Message: "file is too small"}
err := InvalidFileError{Message: "file is too short"}
return nil, errors.Wrap(err, "readHeader")
}
@ -305,7 +305,7 @@ func List(k *crypto.Key, rd io.ReaderAt, size int64) (entries []restic.Blob, hdr
}
if len(buf) < crypto.CiphertextLength(0) {
return nil, 0, errors.New("invalid header, too small")
return nil, 0, errors.New("invalid header, too short")
}
hdrSize = headerLengthSize + uint32(len(buf))

View File

@ -0,0 +1,210 @@
package repository
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"sort"
"github.com/klauspost/compress/zstd"
"github.com/minio/sha256-simd"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/hashing"
"github.com/restic/restic/internal/pack"
"github.com/restic/restic/internal/restic"
)
// ErrPackData is returned if errors are discovered while verifying a packfile
type ErrPackData struct {
PackID restic.ID
errs []error
}
func (e *ErrPackData) Error() string {
return fmt.Sprintf("pack %v contains %v errors: %v", e.PackID, len(e.errs), e.errs)
}
type partialReadError struct {
err error
}
func (e *partialReadError) Error() string {
return e.err.Error()
}
// CheckPack reads a pack and checks the integrity of all blobs.
func CheckPack(ctx context.Context, r *Repository, id restic.ID, blobs []restic.Blob, size int64, bufRd *bufio.Reader, dec *zstd.Decoder) error {
err := checkPackInner(ctx, r, id, blobs, size, bufRd, dec)
if err != nil {
if r.Cache != nil {
// ignore error as there's not much we can do here
_ = r.Cache.Forget(backend.Handle{Type: restic.PackFile, Name: id.String()})
}
// retry pack verification to detect transient errors
err2 := checkPackInner(ctx, r, id, blobs, size, bufRd, dec)
if err2 != nil {
err = err2
} else {
err = fmt.Errorf("check successful on second attempt, original error %w", err)
}
}
return err
}
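
A hedged usage sketch for CheckPack; the decoder/reader setup and the buffer size are assumptions, not taken from the checker code:

// Shared zstd decoder and buffered reader, reset per pack inside CheckPack.
dec, _ := zstd.NewReader(nil)
bufRd := bufio.NewReaderSize(nil, 4*1024*1024) // buffer size is an assumption
err := repository.CheckPack(ctx, repo, packID, blobs, size, bufRd, dec)
var packErr *repository.ErrPackData
if errors.As(err, &packErr) {
    // One or more blobs in this pack are damaged; any other non-nil error
    // means the pack could not be downloaded at all.
}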
func checkPackInner(ctx context.Context, r *Repository, id restic.ID, blobs []restic.Blob, size int64, bufRd *bufio.Reader, dec *zstd.Decoder) error {
debug.Log("checking pack %v", id.String())
if len(blobs) == 0 {
return &ErrPackData{PackID: id, errs: []error{errors.New("pack is empty or not indexed")}}
}
// sanity check blobs in index
sort.Slice(blobs, func(i, j int) bool {
return blobs[i].Offset < blobs[j].Offset
})
idxHdrSize := pack.CalculateHeaderSize(blobs)
lastBlobEnd := 0
nonContinuousPack := false
for _, blob := range blobs {
if lastBlobEnd != int(blob.Offset) {
nonContinuousPack = true
}
lastBlobEnd = int(blob.Offset + blob.Length)
}
// size was calculated by masterindex.PackSize, thus there's no need to recalculate it here
var errs []error
if nonContinuousPack {
debug.Log("Index for pack contains gaps / overlaps, blobs: %v", blobs)
errs = append(errs, errors.New("index for pack contains gaps / overlapping blobs"))
}
// calculate hash on-the-fly while reading the pack and capture pack header
var hash restic.ID
var hdrBuf []byte
h := backend.Handle{Type: backend.PackFile, Name: id.String()}
err := r.be.Load(ctx, h, int(size), 0, func(rd io.Reader) error {
hrd := hashing.NewReader(rd, sha256.New())
bufRd.Reset(hrd)
it := newPackBlobIterator(id, newBufReader(bufRd), 0, blobs, r.Key(), dec)
for {
val, err := it.Next()
if err == errPackEOF {
break
} else if err != nil {
return &partialReadError{err}
}
debug.Log(" check blob %v: %v", val.Handle.ID, val.Handle)
if val.Err != nil {
debug.Log(" error verifying blob %v: %v", val.Handle.ID, val.Err)
errs = append(errs, errors.Errorf("blob %v: %v", val.Handle.ID, val.Err))
}
}
// skip enough bytes until we reach the possible header start
curPos := lastBlobEnd
minHdrStart := int(size) - pack.MaxHeaderSize
if minHdrStart > curPos {
_, err := bufRd.Discard(minHdrStart - curPos)
if err != nil {
return &partialReadError{err}
}
curPos += minHdrStart - curPos
}
// read remainder, which should be the pack header
var err error
hdrBuf = make([]byte, int(size-int64(curPos)))
_, err = io.ReadFull(bufRd, hdrBuf)
if err != nil {
return &partialReadError{err}
}
hash = restic.IDFromHash(hrd.Sum(nil))
return nil
})
if err != nil {
var e *partialReadError
isPartialReadError := errors.As(err, &e)
// failed to load the pack file, return as further checks cannot succeed anyways
debug.Log(" error streaming pack (partial %v): %v", isPartialReadError, err)
if isPartialReadError {
return &ErrPackData{PackID: id, errs: append(errs, fmt.Errorf("partial download error: %w", err))}
}
// The check command suggests repairing files for which an `ErrPackData` is returned. However, this file
// failed to download at all, so there is no point in repairing anything.
return fmt.Errorf("download error: %w", err)
}
if !hash.Equal(id) {
debug.Log("pack ID does not match, want %v, got %v", id, hash)
return &ErrPackData{PackID: id, errs: append(errs, errors.Errorf("unexpected pack id %v", hash))}
}
blobs, hdrSize, err := pack.List(r.Key(), bytes.NewReader(hdrBuf), int64(len(hdrBuf)))
if err != nil {
return &ErrPackData{PackID: id, errs: append(errs, err)}
}
if uint32(idxHdrSize) != hdrSize {
debug.Log("Pack header size does not match, want %v, got %v", idxHdrSize, hdrSize)
errs = append(errs, errors.Errorf("pack header size does not match, want %v, got %v", idxHdrSize, hdrSize))
}
idx := r.Index()
for _, blob := range blobs {
// Check if blob is contained in index and position is correct
idxHas := false
for _, pb := range idx.Lookup(blob.BlobHandle) {
if pb.PackID == id && pb.Blob == blob {
idxHas = true
break
}
}
if !idxHas {
errs = append(errs, errors.Errorf("blob %v is not contained in index or position is incorrect", blob.ID))
continue
}
}
if len(errs) > 0 {
return &ErrPackData{PackID: id, errs: errs}
}
return nil
}
type bufReader struct {
rd *bufio.Reader
buf []byte
}
func newBufReader(rd *bufio.Reader) *bufReader {
return &bufReader{
rd: rd,
}
}
func (b *bufReader) Discard(n int) (discarded int, err error) {
return b.rd.Discard(n)
}
func (b *bufReader) ReadFull(n int) (buf []byte, err error) {
if cap(b.buf) < n {
b.buf = make([]byte, n)
}
b.buf = b.buf[:n]
_, err = io.ReadFull(b.rd, b.buf)
if err != nil {
return nil, err
}
return b.buf, nil
}

View File

@ -18,7 +18,7 @@ func FuzzSaveLoadBlob(f *testing.F) {
}
id := restic.Hash(blob)
repo := TestRepositoryWithVersion(t, 2)
repo, _ := TestRepositoryWithVersion(t, 2)
var wg errgroup.Group
repo.StartPackUploader(context.TODO(), &wg)

View File

@ -178,8 +178,7 @@ func SearchKey(ctx context.Context, s *Repository, password string, maxKeys int,
// LoadKey loads a key from the backend.
func LoadKey(ctx context.Context, s *Repository, id restic.ID) (k *Key, err error) {
h := backend.Handle{Type: restic.KeyFile, Name: id.String()}
data, err := backend.LoadAll(ctx, nil, s.be, h)
data, err := s.LoadRaw(ctx, restic.KeyFile, id)
if err != nil {
return nil, err
}

View File

@ -36,13 +36,13 @@ var lockerInst = &locker{
refreshabilityTimeout: restic.StaleLockTimeout - defaultRefreshInterval*3/2,
}
func Lock(ctx context.Context, repo restic.Repository, exclusive bool, retryLock time.Duration, printRetry func(msg string), logger func(format string, args ...interface{})) (*Unlocker, context.Context, error) {
func Lock(ctx context.Context, repo *Repository, exclusive bool, retryLock time.Duration, printRetry func(msg string), logger func(format string, args ...interface{})) (*Unlocker, context.Context, error) {
return lockerInst.Lock(ctx, repo, exclusive, retryLock, printRetry, logger)
}
// Lock wraps the ctx such that it is cancelled when the repository is unlocked.
// Cancelling the original context also stops the lock refresh.
func (l *locker) Lock(ctx context.Context, repo restic.Repository, exclusive bool, retryLock time.Duration, printRetry func(msg string), logger func(format string, args ...interface{})) (*Unlocker, context.Context, error) {
func (l *locker) Lock(ctx context.Context, repo *Repository, exclusive bool, retryLock time.Duration, printRetry func(msg string), logger func(format string, args ...interface{})) (*Unlocker, context.Context, error) {
lockFn := restic.NewLock
if exclusive {
@ -102,7 +102,7 @@ retryLoop:
refreshChan := make(chan struct{})
forceRefreshChan := make(chan refreshLockRequest)
go l.refreshLocks(ctx, repo.Backend(), lockInfo, refreshChan, forceRefreshChan, logger)
go l.refreshLocks(ctx, repo.be, lockInfo, refreshChan, forceRefreshChan, logger)
go l.monitorLockRefresh(ctx, lockInfo, refreshChan, forceRefreshChan, logger)
return &Unlocker{lockInfo}, ctx, nil
@ -132,7 +132,7 @@ func (l *locker) refreshLocks(ctx context.Context, backend backend.Backend, lock
// remove the lock from the repo
debug.Log("unlocking repository with lock %v", lock)
if err := lock.Unlock(); err != nil {
if err := lock.Unlock(ctx); err != nil {
debug.Log("error while unlocking: %v", err)
logger("error while unlocking: %v", err)
}

View File

@ -19,7 +19,7 @@ import (
type backendWrapper func(r backend.Backend) (backend.Backend, error)
func openLockTestRepo(t *testing.T, wrapper backendWrapper) restic.Repository {
func openLockTestRepo(t *testing.T, wrapper backendWrapper) (*Repository, backend.Backend) {
be := backend.Backend(mem.New())
// initialize repo
TestRepositoryWithBackend(t, be, 0, Options{})
@ -31,10 +31,10 @@ func openLockTestRepo(t *testing.T, wrapper backendWrapper) restic.Repository {
rtest.OK(t, err)
}
return TestOpenBackend(t, be)
return TestOpenBackend(t, be), be
}
func checkedLockRepo(ctx context.Context, t *testing.T, repo restic.Repository, lockerInst *locker, retryLock time.Duration) (*Unlocker, context.Context) {
func checkedLockRepo(ctx context.Context, t *testing.T, repo *Repository, lockerInst *locker, retryLock time.Duration) (*Unlocker, context.Context) {
lock, wrappedCtx, err := lockerInst.Lock(ctx, repo, false, retryLock, func(msg string) {}, func(format string, args ...interface{}) {})
test.OK(t, err)
test.OK(t, wrappedCtx.Err())
@ -46,7 +46,7 @@ func checkedLockRepo(ctx context.Context, t *testing.T, repo restic.Repository,
func TestLock(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo, _ := openLockTestRepo(t, nil)
lock, wrappedCtx := checkedLockRepo(context.Background(), t, repo, lockerInst, 0)
lock.Unlock()
@ -57,7 +57,7 @@ func TestLock(t *testing.T) {
func TestLockCancel(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo, _ := openLockTestRepo(t, nil)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -73,8 +73,8 @@ func TestLockCancel(t *testing.T) {
func TestLockConflict(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo2 := TestOpenBackend(t, repo.Backend())
repo, be := openLockTestRepo(t, nil)
repo2 := TestOpenBackend(t, be)
lock, _, err := Lock(context.Background(), repo, true, 0, func(msg string) {}, func(format string, args ...interface{}) {})
test.OK(t, err)
@ -101,7 +101,7 @@ func (b *writeOnceBackend) Save(ctx context.Context, h backend.Handle, rd backen
func TestLockFailedRefresh(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
repo, _ := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
return &writeOnceBackend{Backend: r}, nil
})
@ -138,7 +138,7 @@ func (b *loggingBackend) Save(ctx context.Context, h backend.Handle, rd backend.
func TestLockSuccessfulRefresh(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
repo, _ := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
return &loggingBackend{
Backend: r,
t: t,
@ -190,7 +190,7 @@ func (b *slowBackend) Save(ctx context.Context, h backend.Handle, rd backend.Rew
func TestLockSuccessfulStaleRefresh(t *testing.T) {
t.Parallel()
var sb *slowBackend
repo := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
repo, _ := openLockTestRepo(t, func(r backend.Backend) (backend.Backend, error) {
sb = &slowBackend{Backend: r}
return sb, nil
})
@ -238,7 +238,7 @@ func TestLockSuccessfulStaleRefresh(t *testing.T) {
func TestLockWaitTimeout(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo, _ := openLockTestRepo(t, nil)
elock, _, err := Lock(context.TODO(), repo, true, 0, func(msg string) {}, func(format string, args ...interface{}) {})
test.OK(t, err)
@ -260,7 +260,7 @@ func TestLockWaitTimeout(t *testing.T) {
func TestLockWaitCancel(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo, _ := openLockTestRepo(t, nil)
elock, _, err := Lock(context.TODO(), repo, true, 0, func(msg string) {}, func(format string, args ...interface{}) {})
test.OK(t, err)
@ -286,7 +286,7 @@ func TestLockWaitCancel(t *testing.T) {
func TestLockWaitSuccess(t *testing.T) {
t.Parallel()
repo := openLockTestRepo(t, nil)
repo, _ := openLockTestRepo(t, nil)
elock, _, err := Lock(context.TODO(), repo, true, 0, func(msg string) {}, func(format string, args ...interface{}) {})
test.OK(t, err)

View File

@ -444,7 +444,7 @@ func decidePackAction(ctx context.Context, opts PruneOptions, repo restic.Reposi
// This is equivalent to sorting by unused / total space.
// Instead of unused[i] / used[i] > unused[j] / used[j] we use
// unused[i] * used[j] > unused[j] * used[i] as uint32*uint32 < uint64
// Moreover packs containing trees and too small packs are sorted to the beginning
// Moreover packs containing trees and too short packs are sorted to the beginning
sort.Slice(repackCandidates, func(i, j int) bool {
pi := repackCandidates[i].packInfo
pj := repackCandidates[j].packInfo
@ -621,7 +621,7 @@ func (plan *PrunePlan) Execute(ctx context.Context, printer progress.Printer) (e
// deleteFiles deletes the given fileList of fileType in parallel.
// If ignoreError=true, it prints a warning on error; otherwise it aborts.
func deleteFiles(ctx context.Context, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType, printer progress.Printer) error {
func deleteFiles(ctx context.Context, ignoreError bool, repo restic.RemoverUnpacked, fileList restic.IDSet, fileType restic.FileType, printer progress.Printer) error {
bar := printer.NewCounter("files deleted")
defer bar.Done()

View File

@ -14,7 +14,7 @@ import (
)
func testPrune(t *testing.T, opts repository.PruneOptions, errOnUnused bool) {
repo := repository.TestRepository(t).(*repository.Repository)
repo, be := repository.TestRepositoryWithVersion(t, 0)
createRandomBlobs(t, repo, 4, 0.5, true)
createRandomBlobs(t, repo, 5, 0.5, true)
keep, _ := selectBlobs(t, repo, 0.5)
@ -37,7 +37,7 @@ func testPrune(t *testing.T, opts repository.PruneOptions, errOnUnused bool) {
rtest.OK(t, plan.Execute(context.TODO(), &progress.NoopPrinter{}))
repo = repository.TestOpenBackend(t, repo.Backend()).(*repository.Repository)
repo = repository.TestOpenBackend(t, be)
checker.TestCheckRepo(t, repo, true)
if errOnUnused {

View File

@ -0,0 +1,56 @@
package repository
import (
"bytes"
"context"
"fmt"
"io"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/restic"
)
// LoadRaw reads all data stored in the backend for the file with id and filetype t.
// If the backend returns data that does not match the id, then the buffer is returned
// along with an error that wraps restic.ErrInvalidData.
func (r *Repository) LoadRaw(ctx context.Context, t restic.FileType, id restic.ID) (buf []byte, err error) {
h := backend.Handle{Type: t, Name: id.String()}
buf, err = loadRaw(ctx, r.be, h)
// retry loading damaged data only once. If a file fails to download correctly
// the second time, then it is likely corrupted at the backend.
if h.Type != backend.ConfigFile && id != restic.Hash(buf) {
if r.Cache != nil {
// Cleanup cache to make sure it's not the cached copy that is broken.
// Ignore error as there's not much we can do in that case.
_ = r.Cache.Forget(h)
}
buf, err = loadRaw(ctx, r.be, h)
if err == nil && id != restic.Hash(buf) {
// Return corrupted data to the caller if it is still broken the second time to
// let the caller decide what to do with the data.
return buf, fmt.Errorf("LoadRaw(%v): %w", h, restic.ErrInvalidData)
}
}
if err != nil {
return nil, err
}
return buf, nil
}
func loadRaw(ctx context.Context, be backend.Backend, h backend.Handle) (buf []byte, err error) {
err = be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
wr := new(bytes.Buffer)
_, cerr := io.Copy(wr, rd)
if cerr != nil {
return cerr
}
buf = wr.Bytes()
return nil
})
return buf, err
}
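
A brief usage sketch for LoadRaw, matching the contract described in its doc comment (file type and id are illustrative):

buf, err := repo.LoadRaw(ctx, restic.SnapshotFile, id)
if errors.Is(err, restic.ErrInvalidData) {
    // buf still holds the corrupt bytes, so the caller can decide whether to
    // salvage them or discard the file.
} else if err != nil {
    return err
}
// otherwise buf holds the verified file contents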

View File

@ -0,0 +1,108 @@
package repository_test
import (
"bytes"
"context"
"io"
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/mem"
"github.com/restic/restic/internal/backend/mock"
"github.com/restic/restic/internal/cache"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
const KiB = 1 << 10
const MiB = 1 << 20
func TestLoadRaw(t *testing.T) {
b := mem.New()
repo, err := repository.New(b, repository.Options{})
rtest.OK(t, err)
for i := 0; i < 5; i++ {
data := rtest.Random(23+i, 500*KiB)
id := restic.Hash(data)
h := backend.Handle{Name: id.String(), Type: backend.PackFile}
err := b.Save(context.TODO(), h, backend.NewByteReader(data, b.Hasher()))
rtest.OK(t, err)
buf, err := repo.LoadRaw(context.TODO(), backend.PackFile, id)
rtest.OK(t, err)
if len(buf) != len(data) {
t.Errorf("length of returned buffer does not match, want %d, got %d", len(data), len(buf))
continue
}
if !bytes.Equal(buf, data) {
t.Errorf("wrong data returned")
continue
}
}
}
func TestLoadRawBroken(t *testing.T) {
b := mock.NewBackend()
repo, err := repository.New(b, repository.Options{})
rtest.OK(t, err)
data := rtest.Random(23, 10*KiB)
id := restic.Hash(data)
// damage buffer
data[0] ^= 0xff
b.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
return io.NopCloser(bytes.NewReader(data)), nil
}
// must detect but still return corrupt data
buf, err := repo.LoadRaw(context.TODO(), backend.PackFile, id)
rtest.Assert(t, bytes.Equal(buf, data), "wrong data returned")
rtest.Assert(t, errors.Is(err, restic.ErrInvalidData), "missing expected ErrInvalidData error, got %v", err)
// cause the first access to fail, but repair the data for the second access
data[0] ^= 0xff
loadCtr := 0
b.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
data[0] ^= 0xff
loadCtr++
return io.NopCloser(bytes.NewReader(data)), nil
}
// must retry load of corrupted data
buf, err = repo.LoadRaw(context.TODO(), backend.PackFile, id)
rtest.OK(t, err)
rtest.Assert(t, bytes.Equal(buf, data), "wrong data returned")
rtest.Equals(t, 2, loadCtr, "missing retry on broken data")
}
func TestLoadRawBrokenWithCache(t *testing.T) {
b := mock.NewBackend()
c := cache.TestNewCache(t)
repo, err := repository.New(b, repository.Options{})
rtest.OK(t, err)
repo.UseCache(c)
data := rtest.Random(23, 10*KiB)
id := restic.Hash(data)
loadCtr := 0
// cause the first access to fail, but repair the data for the second access
b.OpenReaderFn = func(ctx context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
data[0] ^= 0xff
loadCtr++
return io.NopCloser(bytes.NewReader(data)), nil
}
// must retry load of corrupted data
buf, err := repo.LoadRaw(context.TODO(), backend.SnapshotFile, id)
rtest.OK(t, err)
rtest.Assert(t, bytes.Equal(buf, data), "wrong data returned")
rtest.Equals(t, 2, loadCtr, "missing retry on broken data")
}

View File

@ -79,13 +79,8 @@ func repack(ctx context.Context, repo restic.Repository, dstRepo restic.Reposito
for t := range downloadQueue {
err := repo.LoadBlobsFromPack(wgCtx, t.PackID, t.Blobs, func(blob restic.BlobHandle, buf []byte, err error) error {
if err != nil {
var ierr error
// check whether we can get a valid copy somewhere else
buf, ierr = repo.LoadBlob(wgCtx, blob.Type, blob.ID, nil)
if ierr != nil {
// no luck, return the original error
return err
}
// a required blob couldn't be retrieved
return err
}
keepMutex.Lock()

View File

@ -167,7 +167,7 @@ func repack(t *testing.T, repo restic.Repository, packs restic.IDSet, blobs rest
}
for id := range repackedBlobs {
err = repo.Backend().Remove(context.TODO(), backend.Handle{Type: restic.PackFile, Name: id.String()})
err = repo.RemoveUnpacked(context.TODO(), restic.PackFile, id)
if err != nil {
t.Fatal(err)
}
@ -215,7 +215,7 @@ func TestRepack(t *testing.T) {
}
func testRepack(t *testing.T, version uint) {
repo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
seed := time.Now().UnixNano()
rand.Seed(seed)
@ -293,8 +293,8 @@ func (r oneConnectionRepo) Connections() uint {
}
func testRepackCopy(t *testing.T, version uint) {
repo := repository.TestRepositoryWithVersion(t, version)
dstRepo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
dstRepo, _ := repository.TestRepositoryWithVersion(t, version)
// test with minimal possible connection count
repoWrapped := &oneConnectionRepo{repo}
@ -340,7 +340,7 @@ func TestRepackWrongBlob(t *testing.T) {
func testRepackWrongBlob(t *testing.T, version uint) {
// disable verification to allow adding corrupted blobs to the repository
repo := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
repo, _ := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
seed := time.Now().UnixNano()
rand.Seed(seed)
@ -366,7 +366,7 @@ func TestRepackBlobFallback(t *testing.T) {
func testRepackBlobFallback(t *testing.T, version uint) {
// disable verification to allow adding corrupted blobs to the repository
repo := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
repo, _ := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
seed := time.Now().UnixNano()
rand.Seed(seed)

View File

@ -16,16 +16,16 @@ func listIndex(t *testing.T, repo restic.Lister) restic.IDSet {
return listFiles(t, repo, restic.IndexFile)
}
func testRebuildIndex(t *testing.T, readAllPacks bool, damage func(t *testing.T, repo *repository.Repository)) {
repo := repository.TestRepository(t).(*repository.Repository)
func testRebuildIndex(t *testing.T, readAllPacks bool, damage func(t *testing.T, repo *repository.Repository, be backend.Backend)) {
repo, be := repository.TestRepositoryWithVersion(t, 0)
createRandomBlobs(t, repo, 4, 0.5, true)
createRandomBlobs(t, repo, 5, 0.5, true)
indexes := listIndex(t, repo)
t.Logf("old indexes %v", indexes)
damage(t, repo)
damage(t, repo, be)
repo = repository.TestOpenBackend(t, repo.Backend()).(*repository.Repository)
repo = repository.TestOpenBackend(t, be)
rtest.OK(t, repository.RepairIndex(context.TODO(), repo, repository.RepairIndexOptions{
ReadAllPacks: readAllPacks,
}, &progress.NoopPrinter{}))
@ -40,17 +40,17 @@ func testRebuildIndex(t *testing.T, readAllPacks bool, damage func(t *testing.T,
func TestRebuildIndex(t *testing.T) {
for _, test := range []struct {
name string
damage func(t *testing.T, repo *repository.Repository)
damage func(t *testing.T, repo *repository.Repository, be backend.Backend)
}{
{
"valid index",
func(t *testing.T, repo *repository.Repository) {},
func(t *testing.T, repo *repository.Repository, be backend.Backend) {},
},
{
"damaged index",
func(t *testing.T, repo *repository.Repository) {
func(t *testing.T, repo *repository.Repository, be backend.Backend) {
index := listIndex(t, repo).List()[0]
replaceFile(t, repo, backend.Handle{Type: restic.IndexFile, Name: index.String()}, func(b []byte) []byte {
replaceFile(t, be, backend.Handle{Type: restic.IndexFile, Name: index.String()}, func(b []byte) []byte {
b[0] ^= 0xff
return b
})
@ -58,16 +58,16 @@ func TestRebuildIndex(t *testing.T) {
},
{
"missing index",
func(t *testing.T, repo *repository.Repository) {
func(t *testing.T, repo *repository.Repository, be backend.Backend) {
index := listIndex(t, repo).List()[0]
rtest.OK(t, repo.Backend().Remove(context.TODO(), backend.Handle{Type: restic.IndexFile, Name: index.String()}))
rtest.OK(t, be.Remove(context.TODO(), backend.Handle{Type: restic.IndexFile, Name: index.String()}))
},
},
{
"missing pack",
func(t *testing.T, repo *repository.Repository) {
func(t *testing.T, repo *repository.Repository, be backend.Backend) {
pack := listPacks(t, repo).List()[0]
rtest.OK(t, repo.Backend().Remove(context.TODO(), backend.Handle{Type: restic.PackFile, Name: pack.String()}))
rtest.OK(t, be.Remove(context.TODO(), backend.Handle{Type: restic.PackFile, Name: pack.String()}))
},
},
} {

View File

@ -31,12 +31,8 @@ func RepairPacks(ctx context.Context, repo restic.Repository, ids restic.IDSet,
err := repo.LoadBlobsFromPack(wgCtx, b.PackID, blobs, func(blob restic.BlobHandle, buf []byte, err error) error {
if err != nil {
// Fallback path
buf, err = repo.LoadBlob(wgCtx, blob.Type, blob.ID, nil)
if err != nil {
printer.E("failed to load blob %v: %v", blob.ID, err)
return nil
}
printer.E("failed to load blob %v: %v", blob.ID, err)
return nil
}
id, _, _, err := repo.SaveBlob(wgCtx, blob.Type, buf, restic.ID{}, true)
if !id.Equal(blob.ID) {

View File

@ -7,6 +7,7 @@ import (
"time"
"github.com/restic/restic/internal/backend"
backendtest "github.com/restic/restic/internal/backend/test"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@ -23,12 +24,12 @@ func listBlobs(repo restic.Repository) restic.BlobSet {
return blobs
}
func replaceFile(t *testing.T, repo restic.Repository, h backend.Handle, damage func([]byte) []byte) {
buf, err := backend.LoadAll(context.TODO(), nil, repo.Backend(), h)
func replaceFile(t *testing.T, be backend.Backend, h backend.Handle, damage func([]byte) []byte) {
buf, err := backendtest.LoadAll(context.TODO(), be, h)
test.OK(t, err)
buf = damage(buf)
test.OK(t, repo.Backend().Remove(context.TODO(), h))
test.OK(t, repo.Backend().Save(context.TODO(), h, backend.NewByteReader(buf, repo.Backend().Hasher())))
test.OK(t, be.Remove(context.TODO(), h))
test.OK(t, be.Save(context.TODO(), h, backend.NewByteReader(buf, be.Hasher())))
}
func TestRepairBrokenPack(t *testing.T) {
@ -38,17 +39,17 @@ func TestRepairBrokenPack(t *testing.T) {
func testRepairBrokenPack(t *testing.T, version uint) {
tests := []struct {
name string
damage func(t *testing.T, repo restic.Repository, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet)
damage func(t *testing.T, repo *repository.Repository, be backend.Backend, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet)
}{
{
"valid pack",
func(t *testing.T, repo restic.Repository, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
func(t *testing.T, repo *repository.Repository, be backend.Backend, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
return packsBefore, restic.NewBlobSet()
},
},
{
"broken pack",
func(t *testing.T, repo restic.Repository, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
func(t *testing.T, repo *repository.Repository, be backend.Backend, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
wrongBlob := createRandomWrongBlob(t, repo)
damagedPacks := findPacksForBlobs(t, repo, restic.NewBlobSet(wrongBlob))
return damagedPacks, restic.NewBlobSet(wrongBlob)
@ -56,10 +57,10 @@ func testRepairBrokenPack(t *testing.T, version uint) {
},
{
"partially broken pack",
func(t *testing.T, repo restic.Repository, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
func(t *testing.T, repo *repository.Repository, be backend.Backend, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
// damage one of the pack files
damagedID := packsBefore.List()[0]
replaceFile(t, repo, backend.Handle{Type: backend.PackFile, Name: damagedID.String()},
replaceFile(t, be, backend.Handle{Type: backend.PackFile, Name: damagedID.String()},
func(buf []byte) []byte {
buf[0] ^= 0xff
return buf
@ -79,10 +80,10 @@ func testRepairBrokenPack(t *testing.T, version uint) {
},
}, {
"truncated pack",
func(t *testing.T, repo restic.Repository, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
func(t *testing.T, repo *repository.Repository, be backend.Backend, packsBefore restic.IDSet) (restic.IDSet, restic.BlobSet) {
// damage one of the pack files
damagedID := packsBefore.List()[0]
replaceFile(t, repo, backend.Handle{Type: backend.PackFile, Name: damagedID.String()},
replaceFile(t, be, backend.Handle{Type: backend.PackFile, Name: damagedID.String()},
func(buf []byte) []byte {
buf = buf[0:10]
return buf
@ -103,7 +104,7 @@ func testRepairBrokenPack(t *testing.T, version uint) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
// disable verification to allow adding corrupted blobs to the repository
repo := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
repo, be := repository.TestRepositoryWithBackend(t, nil, version, repository.Options{NoExtraVerify: true})
seed := time.Now().UnixNano()
rand.Seed(seed)
@ -113,7 +114,7 @@ func testRepairBrokenPack(t *testing.T, version uint) {
packsBefore := listPacks(t, repo)
blobsBefore := listBlobs(repo)
toRepair, damagedBlobs := test.damage(t, repo, packsBefore)
toRepair, damagedBlobs := test.damage(t, repo, be, packsBefore)
rtest.OK(t, repository.RepairPacks(context.TODO(), repo, toRepair, &progress.NoopPrinter{}))
// reload index

View File

@ -1,7 +1,6 @@
package repository
import (
"bufio"
"bytes"
"context"
"fmt"
@ -12,7 +11,6 @@ import (
"sort"
"sync"
"github.com/cenkalti/backoff/v4"
"github.com/klauspost/compress/zstd"
"github.com/restic/chunker"
"github.com/restic/restic/internal/backend"
@ -29,8 +27,6 @@ import (
"golang.org/x/sync/errgroup"
)
const MaxStreamBufferSize = 4 * 1024 * 1024
const MinPackSize = 4 * 1024 * 1024
const DefaultPackSize = 16 * 1024 * 1024
const MaxPackSize = 128 * 1024 * 1024
@ -178,46 +174,11 @@ func (r *Repository) LoadUnpacked(ctx context.Context, t restic.FileType, id res
id = restic.ID{}
}
ctx, cancel := context.WithCancel(ctx)
h := backend.Handle{Type: t, Name: id.String()}
retriedInvalidData := false
var dataErr error
wr := new(bytes.Buffer)
err := r.be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
// make sure this call is idempotent, in case an error occurs
wr.Reset()
_, cerr := io.Copy(wr, rd)
if cerr != nil {
return cerr
}
buf := wr.Bytes()
if t != restic.ConfigFile && !restic.Hash(buf).Equal(id) {
debug.Log("retry loading broken blob %v", h)
if !retriedInvalidData {
retriedInvalidData = true
} else {
// with a canceled context there is not guarantee which error will
// be returned by `be.Load`.
dataErr = fmt.Errorf("load(%v): %w", h, restic.ErrInvalidData)
cancel()
}
return restic.ErrInvalidData
}
return nil
})
if dataErr != nil {
return nil, dataErr
}
buf, err := r.LoadRaw(ctx, t, id)
if err != nil {
return nil, err
}
buf := wr.Bytes()
nonce, ciphertext := buf[:r.key.NonceSize()], buf[r.key.NonceSize():]
plaintext, err := r.key.Open(ciphertext[:0], nonce, ciphertext, nil)
if err != nil {
@ -274,16 +235,27 @@ func (r *Repository) LoadBlob(ctx context.Context, t restic.BlobType, id restic.
// try cached pack files first
sortCachedPacksFirst(r.Cache, blobs)
var lastError error
for _, blob := range blobs {
debug.Log("blob %v/%v found: %v", t, id, blob)
if blob.Type != t {
debug.Log("blob %v has wrong block type, want %v", blob, t)
buf, err := r.loadBlob(ctx, blobs, buf)
if err != nil {
if r.Cache != nil {
for _, blob := range blobs {
h := backend.Handle{Type: restic.PackFile, Name: blob.PackID.String(), IsMetadata: blob.Type.IsMetadata()}
// ignore errors as there's not much we can do here
_ = r.Cache.Forget(h)
}
}
buf, err = r.loadBlob(ctx, blobs, buf)
}
return buf, err
}
func (r *Repository) loadBlob(ctx context.Context, blobs []restic.PackedBlob, buf []byte) ([]byte, error) {
var lastError error
for _, blob := range blobs {
debug.Log("blob %v found: %v", blob.BlobHandle, blob)
// load blob from pack
h := backend.Handle{Type: restic.PackFile, Name: blob.PackID.String(), IsMetadata: t.IsMetadata()}
h := backend.Handle{Type: restic.PackFile, Name: blob.PackID.String(), IsMetadata: blob.Type.IsMetadata()}
switch {
case cap(buf) < int(blob.Length):
@ -292,42 +264,26 @@ func (r *Repository) LoadBlob(ctx context.Context, t restic.BlobType, id restic.
buf = buf[:blob.Length]
}
n, err := backend.ReadAt(ctx, r.be, h, int64(blob.Offset), buf)
_, err := backend.ReadAt(ctx, r.be, h, int64(blob.Offset), buf)
if err != nil {
debug.Log("error loading blob %v: %v", blob, err)
lastError = err
continue
}
if uint(n) != blob.Length {
lastError = errors.Errorf("error loading blob %v: wrong length returned, want %d, got %d",
id.Str(), blob.Length, uint(n))
debug.Log("lastError: %v", lastError)
continue
}
it := newPackBlobIterator(blob.PackID, newByteReader(buf), uint(blob.Offset), []restic.Blob{blob.Blob}, r.key, r.getZstdDecoder())
pbv, err := it.Next()
// decrypt
nonce, ciphertext := buf[:r.key.NonceSize()], buf[r.key.NonceSize():]
plaintext, err := r.key.Open(ciphertext[:0], nonce, ciphertext, nil)
if err == nil {
err = pbv.Err
}
if err != nil {
lastError = errors.Errorf("decrypting blob %v failed: %v", id, err)
continue
}
if blob.IsCompressed() {
plaintext, err = r.getZstdDecoder().DecodeAll(plaintext, make([]byte, 0, blob.DataLength()))
if err != nil {
lastError = errors.Errorf("decompressing blob %v failed: %v", id, err)
continue
}
}
// check hash
if !restic.Hash(plaintext).Equal(id) {
lastError = errors.Errorf("blob %v returned invalid hash", id)
debug.Log("error decoding blob %v: %v", blob, err)
lastError = err
continue
}
plaintext := pbv.Plaintext
if len(plaintext) > cap(buf) {
return plaintext, nil
}
@ -341,7 +297,7 @@ func (r *Repository) LoadBlob(ctx context.Context, t restic.BlobType, id restic.
return nil, lastError
}
return nil, errors.Errorf("loading blob %v from %v packs failed", id.Str(), len(blobs))
return nil, errors.Errorf("loading %v from %v packs failed", blobs[0].BlobHandle, len(blobs))
}
// LookupBlobSize returns the size of blob id.
@ -564,6 +520,11 @@ func (r *Repository) verifyUnpacked(buf []byte, t restic.FileType, expected []by
return nil
}
func (r *Repository) RemoveUnpacked(ctx context.Context, t restic.FileType, id restic.ID) error {
// TODO prevent everything except removing snapshots for non-repository code
return r.be.Remove(ctx, backend.Handle{Type: t, Name: id.String()})
}
// Flush saves all remaining packs and the index
func (r *Repository) Flush(ctx context.Context) error {
if err := r.flushPacks(ctx); err != nil {
@ -618,11 +579,6 @@ func (r *Repository) flushPacks(ctx context.Context) error {
return err
}
// Backend returns the backend for the repository.
func (r *Repository) Backend() backend.Backend {
return r.be
}
func (r *Repository) Connections() uint {
return r.be.Connections()
}
@ -913,7 +869,17 @@ func (r *Repository) List(ctx context.Context, t restic.FileType, fn func(restic
func (r *Repository) ListPack(ctx context.Context, id restic.ID, size int64) ([]restic.Blob, uint32, error) {
h := backend.Handle{Type: restic.PackFile, Name: id.String()}
return pack.List(r.Key(), backend.ReaderAt(ctx, r.Backend(), h), size)
entries, hdrSize, err := pack.List(r.Key(), backend.ReaderAt(ctx, r.be, h), size)
if err != nil {
if r.Cache != nil {
// ignore error as there is not much we can do here
_ = r.Cache.Forget(h)
}
// retry on error
entries, hdrSize, err = pack.List(r.Key(), backend.ReaderAt(ctx, r.be, h), size)
}
return entries, hdrSize, err
}
// Delete calls backend.Delete() if implemented, and returns an error
@ -966,19 +932,21 @@ func (r *Repository) SaveBlob(ctx context.Context, t restic.BlobType, buf []byte
}
type backendLoadFn func(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error
type loadBlobFn func(ctx context.Context, t restic.BlobType, id restic.ID, buf []byte) ([]byte, error)
// Skip sections with more than 4MB unused blobs
const maxUnusedRange = 4 * 1024 * 1024
// Skip sections with more than 1MB unused blobs
const maxUnusedRange = 1 * 1024 * 1024
// LoadBlobsFromPack loads the listed blobs from the specified pack file. The plaintext blob is passed to
// the handleBlobFn callback or an error if decryption failed or the blob hash does not match.
// handleBlobFn is called at most once for each blob. If the callback returns an error,
// then LoadBlobsFromPack will abort and not retry it.
// then LoadBlobsFromPack will abort and not retry it. The buf passed to the callback is only valid within
// this specific call. The callback must not keep a reference to buf.
func (r *Repository) LoadBlobsFromPack(ctx context.Context, packID restic.ID, blobs []restic.Blob, handleBlobFn func(blob restic.BlobHandle, buf []byte, err error) error) error {
return streamPack(ctx, r.Backend().Load, r.key, packID, blobs, handleBlobFn)
return streamPack(ctx, r.be.Load, r.LoadBlob, r.getZstdDecoder(), r.key, packID, blobs, handleBlobFn)
}
func streamPack(ctx context.Context, beLoad backendLoadFn, key *crypto.Key, packID restic.ID, blobs []restic.Blob, handleBlobFn func(blob restic.BlobHandle, buf []byte, err error) error) error {
func streamPack(ctx context.Context, beLoad backendLoadFn, loadBlobFn loadBlobFn, dec *zstd.Decoder, key *crypto.Key, packID restic.ID, blobs []restic.Blob, handleBlobFn func(blob restic.BlobHandle, buf []byte, err error) error) error {
if len(blobs) == 0 {
// nothing to do
return nil
@ -990,14 +958,29 @@ func streamPack(ctx context.Context, beLoad backendLoadFn, key *crypto.Key, pack
lowerIdx := 0
lastPos := blobs[0].Offset
const maxChunkSize = 2 * DefaultPackSize
for i := 0; i < len(blobs); i++ {
if blobs[i].Offset < lastPos {
// don't wait for streamPackPart to fail
return errors.Errorf("overlapping blobs in pack %v", packID)
}
chunkSizeAfter := (blobs[i].Offset + blobs[i].Length) - blobs[lowerIdx].Offset
split := false
// split if the chunk would become larger than maxChunkSize. Oversized chunks are
// handled by the requirement that the chunk contains at least one blob (i > lowerIdx)
if i > lowerIdx && chunkSizeAfter >= maxChunkSize {
split = true
}
// skip too large gaps as a new request is typically much cheaper than data transfers
if blobs[i].Offset-lastPos > maxUnusedRange {
split = true
}
if split {
// load everything up to the skipped file section
err := streamPackPart(ctx, beLoad, key, packID, blobs[lowerIdx:i], handleBlobFn)
err := streamPackPart(ctx, beLoad, loadBlobFn, dec, key, packID, blobs[lowerIdx:i], handleBlobFn)
if err != nil {
return err
}
@ -1006,82 +989,133 @@ func streamPack(ctx context.Context, beLoad backendLoadFn, key *crypto.Key, pack
lastPos = blobs[i].Offset + blobs[i].Length
}
// load remainder
return streamPackPart(ctx, beLoad, key, packID, blobs[lowerIdx:], handleBlobFn)
return streamPackPart(ctx, beLoad, loadBlobFn, dec, key, packID, blobs[lowerIdx:], handleBlobFn)
}
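For illustration, a worked example of the split rule above: with maxUnusedRange = 1 MiB and maxChunkSize = 2 * DefaultPackSize = 32 MiB, blobs at offsets 0 (length 300 KiB), 300 KiB (length 200 KiB) and 2 MiB (length 100 KiB) are fetched with two backend requests, because the roughly 1.5 MiB gap between 500 KiB and 2 MiB exceeds maxUnusedRange; the first request covers [0, 500 KiB) and the second [2 MiB, 2 MiB + 100 KiB).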
func streamPackPart(ctx context.Context, beLoad backendLoadFn, key *crypto.Key, packID restic.ID, blobs []restic.Blob, handleBlobFn func(blob restic.BlobHandle, buf []byte, err error) error) error {
h := backend.Handle{Type: restic.PackFile, Name: packID.String(), IsMetadata: false}
func streamPackPart(ctx context.Context, beLoad backendLoadFn, loadBlobFn loadBlobFn, dec *zstd.Decoder, key *crypto.Key, packID restic.ID, blobs []restic.Blob, handleBlobFn func(blob restic.BlobHandle, buf []byte, err error) error) error {
h := backend.Handle{Type: restic.PackFile, Name: packID.String(), IsMetadata: blobs[0].Type.IsMetadata()}
dataStart := blobs[0].Offset
dataEnd := blobs[len(blobs)-1].Offset + blobs[len(blobs)-1].Length
debug.Log("streaming pack %v (%d to %d bytes), blobs: %v", packID, dataStart, dataEnd, len(blobs))
dec, err := zstd.NewReader(nil)
if err != nil {
panic(dec)
}
defer dec.Close()
ctx, cancel := context.WithCancel(ctx)
// stream blobs in pack
err = beLoad(ctx, h, int(dataEnd-dataStart), int64(dataStart), func(rd io.Reader) error {
// prevent callbacks after cancellation
if ctx.Err() != nil {
return ctx.Err()
}
bufferSize := int(dataEnd - dataStart)
if bufferSize > MaxStreamBufferSize {
bufferSize = MaxStreamBufferSize
}
bufRd := bufio.NewReaderSize(rd, bufferSize)
it := NewPackBlobIterator(packID, bufRd, dataStart, blobs, key, dec)
for {
val, err := it.Next()
if err == ErrPackEOF {
break
} else if err != nil {
return err
}
err = handleBlobFn(val.Handle, val.Plaintext, val.Err)
if err != nil {
cancel()
return backoff.Permanent(err)
}
// ensure that each blob is only passed once to handleBlobFn
blobs = blobs[1:]
}
return nil
data := make([]byte, int(dataEnd-dataStart))
err := beLoad(ctx, h, int(dataEnd-dataStart), int64(dataStart), func(rd io.Reader) error {
_, cerr := io.ReadFull(rd, data)
return cerr
})
// prevent callbacks after cancellation
if ctx.Err() != nil {
return ctx.Err()
}
if err != nil {
// the context is only still valid if handleBlobFn never returned an error
if loadBlobFn != nil {
// check whether we can get the remaining blobs somewhere else
for _, entry := range blobs {
buf, ierr := loadBlobFn(ctx, entry.Type, entry.ID, nil)
err = handleBlobFn(entry.BlobHandle, buf, ierr)
if err != nil {
break
}
}
}
return errors.Wrap(err, "StreamPack")
}
it := newPackBlobIterator(packID, newByteReader(data), dataStart, blobs, key, dec)
for {
val, err := it.Next()
if err == errPackEOF {
break
} else if err != nil {
return err
}
if val.Err != nil && loadBlobFn != nil {
var ierr error
// check whether we can get a valid copy somewhere else
buf, ierr := loadBlobFn(ctx, val.Handle.Type, val.Handle.ID, nil)
if ierr == nil {
// success
val.Plaintext = buf
val.Err = nil
}
}
err = handleBlobFn(val.Handle, val.Plaintext, val.Err)
if err != nil {
return err
}
// ensure that each blob is only passed once to handleBlobFn
blobs = blobs[1:]
}
return errors.Wrap(err, "StreamPack")
}
type PackBlobIterator struct {
// discardReader allows the PackBlobIterator to perform zero copy
// reads if the underlying data source is a byte slice.
type discardReader interface {
Discard(n int) (discarded int, err error)
// ReadFull reads the next n bytes into a byte slice. The caller must not
// retain a reference to the byte slice. Modifications are only allowed within
// the boundaries of the returned slice.
ReadFull(n int) (buf []byte, err error)
}
type byteReader struct {
buf []byte
}
func newByteReader(buf []byte) *byteReader {
return &byteReader{
buf: buf,
}
}
func (b *byteReader) Discard(n int) (discarded int, err error) {
if len(b.buf) < n {
return 0, io.ErrUnexpectedEOF
}
b.buf = b.buf[n:]
return n, nil
}
func (b *byteReader) ReadFull(n int) (buf []byte, err error) {
if len(b.buf) < n {
return nil, io.ErrUnexpectedEOF
}
buf = b.buf[:n]
b.buf = b.buf[n:]
return buf, nil
}
type packBlobIterator struct {
packID restic.ID
rd *bufio.Reader
rd discardReader
currentOffset uint
blobs []restic.Blob
key *crypto.Key
dec *zstd.Decoder
buf []byte
decode []byte
}
type PackBlobValue struct {
type packBlobValue struct {
Handle restic.BlobHandle
Plaintext []byte
Err error
}
var ErrPackEOF = errors.New("reached EOF of pack file")
var errPackEOF = errors.New("reached EOF of pack file")
func NewPackBlobIterator(packID restic.ID, rd *bufio.Reader, currentOffset uint,
blobs []restic.Blob, key *crypto.Key, dec *zstd.Decoder) *PackBlobIterator {
return &PackBlobIterator{
func newPackBlobIterator(packID restic.ID, rd discardReader, currentOffset uint,
blobs []restic.Blob, key *crypto.Key, dec *zstd.Decoder) *packBlobIterator {
return &packBlobIterator{
packID: packID,
rd: rd,
currentOffset: currentOffset,
@ -1092,9 +1126,9 @@ func NewPackBlobIterator(packID restic.ID, rd *bufio.Reader, currentOffset uint,
}
// Next returns the next blob, an error or ErrPackEOF if all blobs were read
func (b *PackBlobIterator) Next() (PackBlobValue, error) {
func (b *packBlobIterator) Next() (packBlobValue, error) {
if len(b.blobs) == 0 {
return PackBlobValue{}, ErrPackEOF
return packBlobValue{}, errPackEOF
}
entry := b.blobs[0]
@ -1102,42 +1136,33 @@ func (b *PackBlobIterator) Next() (PackBlobValue, error) {
skipBytes := int(entry.Offset - b.currentOffset)
if skipBytes < 0 {
return PackBlobValue{}, fmt.Errorf("overlapping blobs in pack %v", b.packID)
return packBlobValue{}, fmt.Errorf("overlapping blobs in pack %v", b.packID)
}
_, err := b.rd.Discard(skipBytes)
if err != nil {
return PackBlobValue{}, err
return packBlobValue{}, err
}
b.currentOffset = entry.Offset
h := restic.BlobHandle{ID: entry.ID, Type: entry.Type}
debug.Log(" process blob %v, skipped %d, %v", h, skipBytes, entry)
if uint(cap(b.buf)) < entry.Length {
b.buf = make([]byte, entry.Length)
}
b.buf = b.buf[:entry.Length]
n, err := io.ReadFull(b.rd, b.buf)
buf, err := b.rd.ReadFull(int(entry.Length))
if err != nil {
debug.Log(" read error %v", err)
return PackBlobValue{}, fmt.Errorf("readFull: %w", err)
return packBlobValue{}, fmt.Errorf("readFull: %w", err)
}
if n != len(b.buf) {
return PackBlobValue{}, fmt.Errorf("read blob %v from %v: not enough bytes read, want %v, got %v",
h, b.packID.Str(), len(b.buf), n)
}
b.currentOffset = entry.Offset + entry.Length
if int(entry.Length) <= b.key.NonceSize() {
debug.Log("%v", b.blobs)
return PackBlobValue{}, fmt.Errorf("invalid blob length %v", entry)
return packBlobValue{}, fmt.Errorf("invalid blob length %v", entry)
}
// decryption errors are likely permanent, give the caller a chance to skip them
nonce, ciphertext := b.buf[:b.key.NonceSize()], b.buf[b.key.NonceSize():]
nonce, ciphertext := buf[:b.key.NonceSize()], buf[b.key.NonceSize():]
plaintext, err := b.key.Open(ciphertext[:0], nonce, ciphertext, nil)
if err != nil {
err = fmt.Errorf("decrypting blob %v from %v failed: %w", h, b.packID.Str(), err)
@ -1161,7 +1186,7 @@ func (b *PackBlobIterator) Next() (PackBlobValue, error) {
}
}
return PackBlobValue{entry.BlobHandle, plaintext, err}, nil
return packBlobValue{entry.BlobHandle, plaintext, err}, nil
}
var zeroChunkOnce sync.Once
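For illustration, a rough usage sketch of LoadBlobsFromPack and the callback contract documented above; the helper and its arguments are placeholders and not part of this change:

import (
	"context"

	"github.com/restic/restic/internal/repository"
	"github.com/restic/restic/internal/restic"
)

// collectBlobs copies each plaintext out of the callback, because buf is only
// valid for the duration of the call; returning an error aborts the stream and
// the blob is not retried.
func collectBlobs(ctx context.Context, repo *repository.Repository, packID restic.ID, blobs []restic.Blob) (map[restic.ID][]byte, error) {
	out := make(map[restic.ID][]byte, len(blobs))
	err := repo.LoadBlobsFromPack(ctx, packID, blobs, func(blob restic.BlobHandle, buf []byte, err error) error {
		if err != nil {
			// the blob could not be read, not even via the internal LoadBlob fallback
			return err
		}
		out[blob.ID] = append([]byte(nil), buf...) // keep a copy, buf will be reused
		return nil
	})
	return out, err
}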

View File

@ -146,14 +146,14 @@ func TestStreamPack(t *testing.T) {
}
func testStreamPack(t *testing.T, version uint) {
// always use the same key for deterministic output
const jsonKey = `{"mac":{"k":"eQenuI8adktfzZMuC8rwdA==","r":"k8cfAly2qQSky48CQK7SBA=="},"encrypt":"MKO9gZnRiQFl8mDUurSDa9NMjiu9MUifUrODTHS05wo="}`
var key crypto.Key
err := json.Unmarshal([]byte(jsonKey), &key)
dec, err := zstd.NewReader(nil)
if err != nil {
t.Fatal(err)
panic(dec)
}
defer dec.Close()
// always use the same key for deterministic output
key := testKey(t)
blobSizes := []int{
5522811,
@ -276,7 +276,7 @@ func testStreamPack(t *testing.T, version uint) {
loadCalls = 0
shortFirstLoad = test.shortFirstLoad
err = streamPack(ctx, load, &key, restic.ID{}, test.blobs, handleBlob)
err := streamPack(ctx, load, nil, dec, &key, restic.ID{}, test.blobs, handleBlob)
if err != nil {
t.Fatal(err)
}
@ -339,7 +339,7 @@ func testStreamPack(t *testing.T, version uint) {
return err
}
err = streamPack(ctx, load, &key, restic.ID{}, test.blobs, handleBlob)
err := streamPack(ctx, load, nil, dec, &key, restic.ID{}, test.blobs, handleBlob)
if err == nil {
t.Fatalf("wanted error %v, got nil", test.err)
}
@ -353,7 +353,7 @@ func testStreamPack(t *testing.T, version uint) {
}
func TestBlobVerification(t *testing.T) {
repo := TestRepository(t).(*Repository)
repo := TestRepository(t)
type DamageType string
const (
@ -402,7 +402,7 @@ func TestBlobVerification(t *testing.T) {
}
func TestUnpackedVerification(t *testing.T) {
repo := TestRepository(t).(*Repository)
repo := TestRepository(t)
type DamageType string
const (
@ -449,3 +449,83 @@ func TestUnpackedVerification(t *testing.T) {
}
}
}
func testKey(t *testing.T) crypto.Key {
const jsonKey = `{"mac":{"k":"eQenuI8adktfzZMuC8rwdA==","r":"k8cfAly2qQSky48CQK7SBA=="},"encrypt":"MKO9gZnRiQFl8mDUurSDa9NMjiu9MUifUrODTHS05wo="}`
var key crypto.Key
err := json.Unmarshal([]byte(jsonKey), &key)
if err != nil {
t.Fatal(err)
}
return key
}
func TestStreamPackFallback(t *testing.T) {
dec, err := zstd.NewReader(nil)
if err != nil {
panic(dec)
}
defer dec.Close()
test := func(t *testing.T, failLoad bool) {
key := testKey(t)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
plaintext := rtest.Random(800, 42)
blobID := restic.Hash(plaintext)
blobs := []restic.Blob{
{
Length: uint(crypto.CiphertextLength(len(plaintext))),
Offset: 0,
BlobHandle: restic.BlobHandle{
ID: blobID,
Type: restic.DataBlob,
},
},
}
var loadPack backendLoadFn
if failLoad {
loadPack = func(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
return errors.New("load error")
}
} else {
loadPack = func(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
// just return an empty array to provoke an error
data := make([]byte, length)
return fn(bytes.NewReader(data))
}
}
loadBlob := func(ctx context.Context, t restic.BlobType, id restic.ID, buf []byte) ([]byte, error) {
if id == blobID {
return plaintext, nil
}
return nil, errors.New("unknown blob")
}
blobOK := false
handleBlob := func(blob restic.BlobHandle, buf []byte, err error) error {
rtest.OK(t, err)
rtest.Equals(t, blobID, blob.ID)
rtest.Equals(t, plaintext, buf)
blobOK = true
return err
}
err := streamPack(ctx, loadPack, loadBlob, dec, &key, restic.ID{}, blobs, handleBlob)
rtest.OK(t, err)
rtest.Assert(t, blobOK, "blob failed to load")
}
t.Run("corrupted blob", func(t *testing.T) {
test(t, false)
})
// test fallback for failed pack loading
t.Run("failed load", func(t *testing.T) {
test(t, true)
})
}

View File

@ -9,16 +9,20 @@ import (
"math/rand"
"os"
"path/filepath"
"strings"
"sync"
"testing"
"time"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/local"
"github.com/restic/restic/internal/backend/mem"
"github.com/restic/restic/internal/cache"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/test"
rtest "github.com/restic/restic/internal/test"
"golang.org/x/sync/errgroup"
)
@ -41,7 +45,7 @@ func testSaveCalculateID(t *testing.T, version uint) {
}
func testSave(t *testing.T, version uint, calculateID bool) {
repo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
for _, size := range testSizes {
data := make([]byte, size)
@ -84,7 +88,7 @@ func BenchmarkSaveAndEncrypt(t *testing.B) {
}
func benchmarkSaveAndEncrypt(t *testing.B, version uint) {
repo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
size := 4 << 20 // 4MiB
data := make([]byte, size)
@ -110,7 +114,7 @@ func TestLoadBlob(t *testing.T) {
}
func testLoadBlob(t *testing.T, version uint) {
repo := repository.TestRepositoryWithVersion(t, version)
repo, _ := repository.TestRepositoryWithVersion(t, version)
length := 1000000
buf := crypto.NewBlobBuffer(length)
_, err := io.ReadFull(rnd, buf)
@ -139,12 +143,34 @@ func testLoadBlob(t *testing.T, version uint) {
}
}
func TestLoadBlobBroken(t *testing.T) {
be := mem.New()
repo, _ := repository.TestRepositoryWithBackend(t, &damageOnceBackend{Backend: be}, restic.StableRepoVersion, repository.Options{})
buf := test.Random(42, 1000)
var wg errgroup.Group
repo.StartPackUploader(context.TODO(), &wg)
id, _, _, err := repo.SaveBlob(context.TODO(), restic.TreeBlob, buf, restic.ID{}, false)
rtest.OK(t, err)
rtest.OK(t, repo.Flush(context.Background()))
// setup cache after saving the blob to make sure that the damageOnceBackend damages the cached data
c := cache.TestNewCache(t)
repo.UseCache(c)
data, err := repo.LoadBlob(context.TODO(), restic.TreeBlob, id, nil)
rtest.OK(t, err)
rtest.Assert(t, bytes.Equal(buf, data), "data mismatch")
pack := repo.Index().Lookup(restic.BlobHandle{Type: restic.TreeBlob, ID: id})[0].PackID
rtest.Assert(t, c.Has(backend.Handle{Type: restic.PackFile, Name: pack.String()}), "expected tree pack to be cached")
}
func BenchmarkLoadBlob(b *testing.B) {
repository.BenchmarkAllVersions(b, benchmarkLoadBlob)
}
func benchmarkLoadBlob(b *testing.B, version uint) {
repo := repository.TestRepositoryWithVersion(b, version)
repo, _ := repository.TestRepositoryWithVersion(b, version)
length := 1000000
buf := crypto.NewBlobBuffer(length)
_, err := io.ReadFull(rnd, buf)
@ -185,7 +211,7 @@ func BenchmarkLoadUnpacked(b *testing.B) {
}
func benchmarkLoadUnpacked(b *testing.B, version uint) {
repo := repository.TestRepositoryWithVersion(b, version)
repo, _ := repository.TestRepositoryWithVersion(b, version)
length := 1000000
buf := crypto.NewBlobBuffer(length)
_, err := io.ReadFull(rnd, buf)
@ -221,7 +247,7 @@ func benchmarkLoadUnpacked(b *testing.B, version uint) {
var repoFixture = filepath.Join("testdata", "test-repo.tar.gz")
func TestRepositoryLoadIndex(t *testing.T) {
repo, cleanup := repository.TestFromFixture(t, repoFixture)
repo, _, cleanup := repository.TestFromFixture(t, repoFixture)
defer cleanup()
rtest.OK(t, repo.LoadIndex(context.TODO(), nil))
@ -242,7 +268,7 @@ func loadIndex(ctx context.Context, repo restic.LoaderUnpacked, id restic.ID) (*
}
func TestRepositoryLoadUnpackedBroken(t *testing.T) {
repo := repository.TestRepository(t)
repo, be := repository.TestRepositoryWithVersion(t, 0)
data := rtest.Random(23, 12345)
id := restic.Hash(data)
@ -251,19 +277,16 @@ func TestRepositoryLoadUnpackedBroken(t *testing.T) {
data[0] ^= 0xff
// store broken file
err := repo.Backend().Save(context.TODO(), h, backend.NewByteReader(data, repo.Backend().Hasher()))
err := be.Save(context.TODO(), h, backend.NewByteReader(data, be.Hasher()))
rtest.OK(t, err)
// without a retry backend this will just return an error that the file is broken
_, err = repo.LoadUnpacked(context.TODO(), restic.IndexFile, id)
if err == nil {
t.Fatal("missing expected error")
}
rtest.Assert(t, strings.Contains(err.Error(), "invalid data returned"), "unexpected error: %v", err)
rtest.Assert(t, errors.Is(err, restic.ErrInvalidData), "unexpected error: %v", err)
}
type damageOnceBackend struct {
backend.Backend
m sync.Map
}
func (be *damageOnceBackend) Load(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
@ -271,13 +294,14 @@ func (be *damageOnceBackend) Load(ctx context.Context, h backend.Handle, length
if h.Type == restic.ConfigFile {
return be.Backend.Load(ctx, h, length, offset, fn)
}
// return broken data on the first try
err := be.Backend.Load(ctx, h, length+1, offset, fn)
if err != nil {
// retry
err = be.Backend.Load(ctx, h, length, offset, fn)
h.IsMetadata = false
_, isRetry := be.m.LoadOrStore(h, true)
if !isRetry {
// return broken data on the first try
offset++
}
return err
return be.Backend.Load(ctx, h, length, offset, fn)
}
func TestRepositoryLoadUnpackedRetryBroken(t *testing.T) {
@ -298,7 +322,7 @@ func BenchmarkLoadIndex(b *testing.B) {
func benchmarkLoadIndex(b *testing.B, version uint) {
repository.TestUseLowSecurityKDFParameters(b)
repo := repository.TestRepositoryWithVersion(b, version)
repo, be := repository.TestRepositoryWithVersion(b, version)
idx := index.NewIndex()
for i := 0; i < 5000; i++ {
@ -316,7 +340,7 @@ func benchmarkLoadIndex(b *testing.B, version uint) {
rtest.OK(b, err)
b.Logf("index saved as %v", id.Str())
fi, err := repo.Backend().Stat(context.TODO(), backend.Handle{Type: restic.IndexFile, Name: id.String()})
fi, err := be.Stat(context.TODO(), backend.Handle{Type: restic.IndexFile, Name: id.String()})
rtest.OK(b, err)
b.Logf("filesize is %v", fi.Size)
@ -350,7 +374,7 @@ func TestRepositoryIncrementalIndex(t *testing.T) {
}
func testRepositoryIncrementalIndex(t *testing.T, version uint) {
repo := repository.TestRepositoryWithVersion(t, version).(*repository.Repository)
repo, _ := repository.TestRepositoryWithVersion(t, version)
index.IndexFull = func(*index.Index, bool) bool { return true }
@ -398,3 +422,38 @@ func TestInvalidCompression(t *testing.T) {
_, err = repository.New(nil, repository.Options{Compression: comp})
rtest.Assert(t, err != nil, "missing error")
}
func TestListPack(t *testing.T) {
be := mem.New()
repo, _ := repository.TestRepositoryWithBackend(t, &damageOnceBackend{Backend: be}, restic.StableRepoVersion, repository.Options{})
buf := test.Random(42, 1000)
var wg errgroup.Group
repo.StartPackUploader(context.TODO(), &wg)
id, _, _, err := repo.SaveBlob(context.TODO(), restic.TreeBlob, buf, restic.ID{}, false)
rtest.OK(t, err)
rtest.OK(t, repo.Flush(context.Background()))
// setup cache after saving the blob to make sure that the damageOnceBackend damages the cached data
c := cache.TestNewCache(t)
repo.UseCache(c)
// Forcibly cache pack file
packID := repo.Index().Lookup(restic.BlobHandle{Type: restic.TreeBlob, ID: id})[0].PackID
rtest.OK(t, be.Load(context.TODO(), backend.Handle{Type: restic.PackFile, IsMetadata: true, Name: packID.String()}, 0, 0, func(rd io.Reader) error { return nil }))
// Get size to list pack
var size int64
rtest.OK(t, repo.List(context.TODO(), restic.PackFile, func(id restic.ID, sz int64) error {
if id == packID {
size = sz
}
return nil
}))
blobs, _, err := repo.ListPack(context.TODO(), packID, size)
rtest.OK(t, err)
rtest.Assert(t, len(blobs) == 1 && blobs[0].ID == id, "unexpected blobs in pack: %v", blobs)
rtest.Assert(t, !c.Has(backend.Handle{Type: restic.PackFile, Name: packID.String()}), "tree pack should no longer be cached as ListPack does not set IsMetadata in the backend.Handle")
}

View File

@ -0,0 +1,12 @@
package repository
import (
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/backend/s3"
)
// AsS3Backend extracts the S3 backend from a repository
// TODO remove me once restic 0.17 has been released
func AsS3Backend(repo *Repository) *s3.Backend {
return backend.AsBackend[*s3.Backend](repo.be)
}
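For illustration, a hypothetical call site; this assumes AsS3Backend (via backend.AsBackend) returns nil when the repository's backend is not S3:

import "github.com/restic/restic/internal/repository"

// usesS3 reports whether the repository is backed by S3; it relies on
// AsS3Backend returning nil for non-S3 backends (an assumption, see above).
func usesS3(repo *repository.Repository) bool {
	return repository.AsS3Backend(repo) != nil
}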

View File

@ -46,7 +46,7 @@ const testChunkerPol = chunker.Pol(0x3DA3358B4DC173)
// TestRepositoryWithBackend returns a repository initialized with a test
// password. If be is nil, an in-memory backend is used. A constant polynomial
// is used for the chunker and low-security test parameters.
func TestRepositoryWithBackend(t testing.TB, be backend.Backend, version uint, opts Options) restic.Repository {
func TestRepositoryWithBackend(t testing.TB, be backend.Backend, version uint, opts Options) (*Repository, backend.Backend) {
t.Helper()
TestUseLowSecurityKDFParameters(t)
restic.TestDisableCheckPolynomial(t)
@ -69,19 +69,20 @@ func TestRepositoryWithBackend(t testing.TB, be backend.Backend, version uint, o
t.Fatalf("TestRepository(): initialize repo failed: %v", err)
}
return repo
return repo, be
}
// TestRepository returns a repository initialized with a test password on an
// in-memory backend. When the environment variable RESTIC_TEST_REPO is set to
// a non-existing directory, a local backend is created there and this is used
// instead. The directory is not removed, but left there for inspection.
func TestRepository(t testing.TB) restic.Repository {
func TestRepository(t testing.TB) *Repository {
t.Helper()
return TestRepositoryWithVersion(t, 0)
repo, _ := TestRepositoryWithVersion(t, 0)
return repo
}
func TestRepositoryWithVersion(t testing.TB, version uint) restic.Repository {
func TestRepositoryWithVersion(t testing.TB, version uint) (*Repository, backend.Backend) {
t.Helper()
dir := os.Getenv("RESTIC_TEST_REPO")
opts := Options{}
@ -103,15 +104,15 @@ func TestRepositoryWithVersion(t testing.TB, version uint) restic.Repository {
return TestRepositoryWithBackend(t, nil, version, opts)
}
func TestFromFixture(t testing.TB, repoFixture string) (restic.Repository, func()) {
func TestFromFixture(t testing.TB, repoFixture string) (*Repository, backend.Backend, func()) {
repodir, cleanup := test.Env(t, repoFixture)
repo := TestOpenLocal(t, repodir)
repo, be := TestOpenLocal(t, repodir)
return repo, cleanup
return repo, be, cleanup
}
// TestOpenLocal opens a local repository.
func TestOpenLocal(t testing.TB, dir string) restic.Repository {
func TestOpenLocal(t testing.TB, dir string) (*Repository, backend.Backend) {
var be backend.Backend
be, err := local.Open(context.TODO(), local.Config{Path: dir, Connections: 2})
if err != nil {
@ -120,10 +121,10 @@ func TestOpenLocal(t testing.TB, dir string) restic.Repository {
be = retry.New(be, 3, nil, nil)
return TestOpenBackend(t, be)
return TestOpenBackend(t, be), be
}
func TestOpenBackend(t testing.TB, be backend.Backend) restic.Repository {
func TestOpenBackend(t testing.TB, be backend.Backend) *Repository {
repo, err := New(be, Options{})
if err != nil {
t.Fatal(err)

View File

@ -0,0 +1,103 @@
package repository
import (
"context"
"fmt"
"os"
"path/filepath"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/restic"
)
type upgradeRepoV2Error struct {
UploadNewConfigError error
ReuploadOldConfigError error
BackupFilePath string
}
func (err *upgradeRepoV2Error) Error() string {
if err.ReuploadOldConfigError != nil {
return fmt.Sprintf("error uploading config (%v), re-uploading old config failed as well (%v), but there is a backup of the config file in %v", err.UploadNewConfigError, err.ReuploadOldConfigError, err.BackupFilePath)
}
return fmt.Sprintf("error uploading config (%v), re-uploaded old config was successful, there is a backup of the config file in %v", err.UploadNewConfigError, err.BackupFilePath)
}
func (err *upgradeRepoV2Error) Unwrap() error {
// consider the original upload error as the primary cause
return err.UploadNewConfigError
}
func upgradeRepository(ctx context.Context, repo *Repository) error {
h := backend.Handle{Type: backend.ConfigFile}
if !repo.be.HasAtomicReplace() {
// remove the original file for backends which do not support atomic overwriting
err := repo.be.Remove(ctx, h)
if err != nil {
return fmt.Errorf("remove config failed: %w", err)
}
}
// upgrade config
cfg := repo.Config()
cfg.Version = 2
err := restic.SaveConfig(ctx, repo, cfg)
if err != nil {
return fmt.Errorf("save new config file failed: %w", err)
}
return nil
}
func UpgradeRepo(ctx context.Context, repo *Repository) error {
if repo.Config().Version != 1 {
return fmt.Errorf("repository has version %v, only upgrades from version 1 are supported", repo.Config().Version)
}
tempdir, err := os.MkdirTemp("", "restic-migrate-upgrade-repo-v2-")
if err != nil {
return fmt.Errorf("create temp dir failed: %w", err)
}
h := backend.Handle{Type: restic.ConfigFile}
// read raw config file and save it to a temp dir, just in case
rawConfigFile, err := repo.LoadRaw(ctx, restic.ConfigFile, restic.ID{})
if err != nil {
return fmt.Errorf("load config file failed: %w", err)
}
backupFileName := filepath.Join(tempdir, "config")
err = os.WriteFile(backupFileName, rawConfigFile, 0600)
if err != nil {
return fmt.Errorf("write config file backup to %v failed: %w", tempdir, err)
}
// run the upgrade
err = upgradeRepository(ctx, repo)
if err != nil {
// build an error we can return to the caller
repoError := &upgradeRepoV2Error{
UploadNewConfigError: err,
BackupFilePath: backupFileName,
}
// try contingency methods, reupload the original file
_ = repo.be.Remove(ctx, h)
err = repo.be.Save(ctx, h, backend.NewByteReader(rawConfigFile, nil))
if err != nil {
repoError.ReuploadOldConfigError = err
}
return repoError
}
_ = os.Remove(backupFileName)
_ = os.Remove(tempdir)
return nil
}
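For illustration, a possible call pattern for UpgradeRepo; the wrapper below is hypothetical, and on failure the returned error names a local backup copy of the old config file:

import (
	"context"

	"github.com/restic/restic/internal/repository"
)

// upgradeIfV1 upgrades a version 1 repository to version 2 and leaves newer
// repositories untouched; UpgradeRepo itself rejects anything but version 1.
func upgradeIfV1(ctx context.Context, repo *repository.Repository) error {
	if repo.Config().Version != 1 {
		return nil
	}
	return repository.UpgradeRepo(ctx, repo)
}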

View File

@ -0,0 +1,82 @@
package repository
import (
"context"
"os"
"path/filepath"
"sync"
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
rtest "github.com/restic/restic/internal/test"
)
func TestUpgradeRepoV2(t *testing.T) {
repo, _ := TestRepositoryWithVersion(t, 1)
if repo.Config().Version != 1 {
t.Fatal("test repo has wrong version")
}
err := UpgradeRepo(context.Background(), repo)
rtest.OK(t, err)
}
type failBackend struct {
backend.Backend
mu sync.Mutex
ConfigFileSavesUntilError uint
}
func (be *failBackend) Save(ctx context.Context, h backend.Handle, rd backend.RewindReader) error {
if h.Type != backend.ConfigFile {
return be.Backend.Save(ctx, h, rd)
}
be.mu.Lock()
if be.ConfigFileSavesUntilError == 0 {
be.mu.Unlock()
return errors.New("failure induced for testing")
}
be.ConfigFileSavesUntilError--
be.mu.Unlock()
return be.Backend.Save(ctx, h, rd)
}
func TestUpgradeRepoV2Failure(t *testing.T) {
be := TestBackend(t)
// wrap backend so that it fails upgrading the config after the initial write
be = &failBackend{
ConfigFileSavesUntilError: 1,
Backend: be,
}
repo, _ := TestRepositoryWithBackend(t, be, 1, Options{})
if repo.Config().Version != 1 {
t.Fatal("test repo has wrong version")
}
err := UpgradeRepo(context.Background(), repo)
if err == nil {
t.Fatal("expected error returned from Apply(), got nil")
}
upgradeErr := err.(*upgradeRepoV2Error)
if upgradeErr.UploadNewConfigError == nil {
t.Fatal("expected upload error, got nil")
}
if upgradeErr.ReuploadOldConfigError == nil {
t.Fatal("expected reupload error, got nil")
}
if upgradeErr.BackupFilePath == "" {
t.Fatal("no backup file path found")
}
rtest.OK(t, os.Remove(upgradeErr.BackupFilePath))
rtest.OK(t, os.Remove(filepath.Dir(upgradeErr.BackupFilePath)))
}

View File

@ -12,12 +12,15 @@ import (
"testing"
"time"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/debug"
)
// UnlockCancelDelay bounds how long lock cleanup operations will wait
// if the passed-in context was canceled.
const UnlockCancelDelay time.Duration = 1 * time.Minute
// Lock represents a process locking the repository for an operation.
//
// There are two types of locks: exclusive and non-exclusive. There may be many
@ -36,7 +39,7 @@ type Lock struct {
UID uint32 `json:"uid,omitempty"`
GID uint32 `json:"gid,omitempty"`
repo Repository
repo Unpacked
lockID *ID
}
@ -87,14 +90,14 @@ var ErrRemovedLock = errors.New("lock file was removed in the meantime")
// NewLock returns a new, non-exclusive lock for the repository. If an
// exclusive lock is already held by another process, it returns an error
// that satisfies IsAlreadyLocked.
func NewLock(ctx context.Context, repo Repository) (*Lock, error) {
func NewLock(ctx context.Context, repo Unpacked) (*Lock, error) {
return newLock(ctx, repo, false)
}
// NewExclusiveLock returns a new, exclusive lock for the repository. If
// another lock (normal and exclusive) is already held by another process,
// it returns an error that satisfies IsAlreadyLocked.
func NewExclusiveLock(ctx context.Context, repo Repository) (*Lock, error) {
func NewExclusiveLock(ctx context.Context, repo Unpacked) (*Lock, error) {
return newLock(ctx, repo, true)
}
@ -106,7 +109,7 @@ func TestSetLockTimeout(t testing.TB, d time.Duration) {
waitBeforeLockCheck = d
}
func newLock(ctx context.Context, repo Repository, excl bool) (*Lock, error) {
func newLock(ctx context.Context, repo Unpacked, excl bool) (*Lock, error) {
lock := &Lock{
Time: time.Now(),
PID: os.Getpid(),
@ -137,7 +140,7 @@ func newLock(ctx context.Context, repo Repository, excl bool) (*Lock, error) {
time.Sleep(waitBeforeLockCheck)
if err = lock.checkForOtherLocks(ctx); err != nil {
_ = lock.Unlock()
_ = lock.Unlock(ctx)
return nil, err
}
@ -221,12 +224,15 @@ func (l *Lock) createLock(ctx context.Context) (ID, error) {
}
// Unlock removes the lock from the repository.
func (l *Lock) Unlock() error {
func (l *Lock) Unlock(ctx context.Context) error {
if l == nil || l.lockID == nil {
return nil
}
return l.repo.Backend().Remove(context.TODO(), backend.Handle{Type: LockFile, Name: l.lockID.String()})
ctx, cancel := delayedCancelContext(ctx, UnlockCancelDelay)
defer cancel()
return l.repo.RemoveUnpacked(ctx, LockFile, *l.lockID)
}
var StaleLockTimeout = 30 * time.Minute
@ -267,6 +273,23 @@ func (l *Lock) Stale() bool {
return false
}
func delayedCancelContext(parentCtx context.Context, delay time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(context.Background())
go func() {
select {
case <-parentCtx.Done():
case <-ctx.Done():
break
}
time.Sleep(delay)
cancel()
}()
return ctx, cancel
}
// Refresh refreshes the lock by creating a new file in the backend with a new
// timestamp. Afterwards the old lock is removed.
func (l *Lock) Refresh(ctx context.Context) error {
@ -286,7 +309,10 @@ func (l *Lock) Refresh(ctx context.Context) error {
oldLockID := l.lockID
l.lockID = &id
return l.repo.Backend().Remove(context.TODO(), backend.Handle{Type: LockFile, Name: oldLockID.String()})
ctx, cancel := delayedCancelContext(ctx, UnlockCancelDelay)
defer cancel()
return l.repo.RemoveUnpacked(ctx, LockFile, *oldLockID)
}
// RefreshStaleLock is an extended variant of Refresh that can also refresh stale lock files.
@ -313,15 +339,19 @@ func (l *Lock) RefreshStaleLock(ctx context.Context) error {
time.Sleep(waitBeforeLockCheck)
exists, err = l.checkExistence(ctx)
ctx, cancel := delayedCancelContext(ctx, UnlockCancelDelay)
defer cancel()
if err != nil {
// cleanup replacement lock
_ = l.repo.Backend().Remove(context.TODO(), backend.Handle{Type: LockFile, Name: id.String()})
_ = l.repo.RemoveUnpacked(ctx, LockFile, id)
return err
}
if !exists {
// cleanup replacement lock
_ = l.repo.Backend().Remove(context.TODO(), backend.Handle{Type: LockFile, Name: id.String()})
_ = l.repo.RemoveUnpacked(ctx, LockFile, id)
return ErrRemovedLock
}
@ -332,7 +362,7 @@ func (l *Lock) RefreshStaleLock(ctx context.Context) error {
oldLockID := l.lockID
l.lockID = &id
return l.repo.Backend().Remove(context.TODO(), backend.Handle{Type: LockFile, Name: oldLockID.String()})
return l.repo.RemoveUnpacked(ctx, LockFile, *oldLockID)
}
func (l *Lock) checkExistence(ctx context.Context) (bool, error) {
@ -390,7 +420,7 @@ func LoadLock(ctx context.Context, repo LoaderUnpacked, id ID) (*Lock, error) {
}
// RemoveStaleLocks deletes all locks detected as stale from the repository.
func RemoveStaleLocks(ctx context.Context, repo Repository) (uint, error) {
func RemoveStaleLocks(ctx context.Context, repo Unpacked) (uint, error) {
var processed uint
err := ForAllLocks(ctx, repo, nil, func(id ID, lock *Lock, err error) error {
if err != nil {
@ -400,7 +430,7 @@ func RemoveStaleLocks(ctx context.Context, repo Repository) (uint, error) {
}
if lock.Stale() {
err = repo.Backend().Remove(ctx, backend.Handle{Type: LockFile, Name: id.String()})
err = repo.RemoveUnpacked(ctx, LockFile, id)
if err == nil {
processed++
}
@ -413,10 +443,10 @@ func RemoveStaleLocks(ctx context.Context, repo Repository) (uint, error) {
}
// RemoveAllLocks removes all locks forcefully.
func RemoveAllLocks(ctx context.Context, repo Repository) (uint, error) {
func RemoveAllLocks(ctx context.Context, repo Unpacked) (uint, error) {
var processed uint32
err := ParallelList(ctx, repo, LockFile, repo.Connections(), func(ctx context.Context, id ID, _ int64) error {
err := repo.Backend().Remove(ctx, backend.Handle{Type: LockFile, Name: id.String()})
err := repo.RemoveUnpacked(ctx, LockFile, id)
if err == nil {
atomic.AddUint32(&processed, 1)
}
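For illustration, a minimal sketch of what the bounded cancellation above means for callers; the helper is hypothetical and the timing comment refers to UnlockCancelDelay:

import (
	"context"

	"github.com/restic/restic/internal/restic"
)

// unlockAfterInterrupt shows that Unlock still gets up to UnlockCancelDelay
// (one minute) to remove the lock file even when the caller's context is
// already canceled, e.g. because the user interrupted the operation.
func unlockAfterInterrupt(lock *restic.Lock) error {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate an interrupted operation

	// Unlock derives its own context via delayedCancelContext, so the removal
	// below is not aborted immediately despite ctx being canceled.
	return lock.Unlock(ctx)
}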

View File

@ -22,7 +22,7 @@ func TestLock(t *testing.T) {
lock, err := restic.NewLock(context.TODO(), repo)
rtest.OK(t, err)
rtest.OK(t, lock.Unlock())
rtest.OK(t, lock.Unlock(context.TODO()))
}
func TestDoubleUnlock(t *testing.T) {
@ -32,9 +32,9 @@ func TestDoubleUnlock(t *testing.T) {
lock, err := restic.NewLock(context.TODO(), repo)
rtest.OK(t, err)
rtest.OK(t, lock.Unlock())
rtest.OK(t, lock.Unlock(context.TODO()))
err = lock.Unlock()
err = lock.Unlock(context.TODO())
rtest.Assert(t, err != nil,
"double unlock didn't return an error, got %v", err)
}
@ -49,8 +49,8 @@ func TestMultipleLock(t *testing.T) {
lock2, err := restic.NewLock(context.TODO(), repo)
rtest.OK(t, err)
rtest.OK(t, lock1.Unlock())
rtest.OK(t, lock2.Unlock())
rtest.OK(t, lock1.Unlock(context.TODO()))
rtest.OK(t, lock2.Unlock(context.TODO()))
}
type failLockLoadingBackend struct {
@ -66,7 +66,7 @@ func (be *failLockLoadingBackend) Load(ctx context.Context, h backend.Handle, le
func TestMultipleLockFailure(t *testing.T) {
be := &failLockLoadingBackend{Backend: mem.New()}
repo := repository.TestRepositoryWithBackend(t, be, 0, repository.Options{})
repo, _ := repository.TestRepositoryWithBackend(t, be, 0, repository.Options{})
restic.TestSetLockTimeout(t, 5*time.Millisecond)
lock1, err := restic.NewLock(context.TODO(), repo)
@ -75,7 +75,7 @@ func TestMultipleLockFailure(t *testing.T) {
_, err = restic.NewLock(context.TODO(), repo)
rtest.Assert(t, err != nil, "unreadable lock file did not result in an error")
rtest.OK(t, lock1.Unlock())
rtest.OK(t, lock1.Unlock(context.TODO()))
}
func TestLockExclusive(t *testing.T) {
@ -83,7 +83,7 @@ func TestLockExclusive(t *testing.T) {
elock, err := restic.NewExclusiveLock(context.TODO(), repo)
rtest.OK(t, err)
rtest.OK(t, elock.Unlock())
rtest.OK(t, elock.Unlock(context.TODO()))
}
func TestLockOnExclusiveLockedRepo(t *testing.T) {
@ -99,8 +99,8 @@ func TestLockOnExclusiveLockedRepo(t *testing.T) {
rtest.Assert(t, restic.IsAlreadyLocked(err),
"create normal lock with exclusively locked repo didn't return the correct error")
rtest.OK(t, lock.Unlock())
rtest.OK(t, elock.Unlock())
rtest.OK(t, lock.Unlock(context.TODO()))
rtest.OK(t, elock.Unlock(context.TODO()))
}
func TestExclusiveLockOnLockedRepo(t *testing.T) {
@ -116,8 +116,8 @@ func TestExclusiveLockOnLockedRepo(t *testing.T) {
rtest.Assert(t, restic.IsAlreadyLocked(err),
"create normal lock with exclusively locked repo didn't return the correct error")
rtest.OK(t, lock.Unlock())
rtest.OK(t, elock.Unlock())
rtest.OK(t, lock.Unlock(context.TODO()))
rtest.OK(t, elock.Unlock(context.TODO()))
}
func createFakeLock(repo restic.SaverUnpacked, t time.Time, pid int) (restic.ID, error) {
@ -130,9 +130,8 @@ func createFakeLock(repo restic.SaverUnpacked, t time.Time, pid int) (restic.ID,
return restic.SaveJSONUnpacked(context.TODO(), repo, restic.LockFile, &newLock)
}
func removeLock(repo restic.Repository, id restic.ID) error {
h := backend.Handle{Type: restic.LockFile, Name: id.String()}
return repo.Backend().Remove(context.TODO(), h)
func removeLock(repo restic.RemoverUnpacked, id restic.ID) error {
return repo.RemoveUnpacked(context.TODO(), restic.LockFile, id)
}
var staleLockTests = []struct {
@ -191,13 +190,16 @@ func TestLockStale(t *testing.T) {
}
}
func lockExists(repo restic.Repository, t testing.TB, id restic.ID) bool {
h := backend.Handle{Type: restic.LockFile, Name: id.String()}
_, err := repo.Backend().Stat(context.TODO(), h)
if err != nil && !repo.Backend().IsNotExist(err) {
t.Fatal(err)
}
return err == nil
func lockExists(repo restic.Lister, t testing.TB, lockID restic.ID) bool {
var exists bool
rtest.OK(t, repo.List(context.TODO(), restic.LockFile, func(id restic.ID, size int64) error {
if id == lockID {
exists = true
}
return nil
}))
return exists
}
func TestLockWithStaleLock(t *testing.T) {
@ -294,7 +296,7 @@ func testLockRefresh(t *testing.T, refresh func(lock *restic.Lock) error) {
rtest.OK(t, err)
rtest.Assert(t, lock2.Time.After(time0),
"expected a later timestamp after lock refresh")
rtest.OK(t, lock.Unlock())
rtest.OK(t, lock.Unlock(context.TODO()))
}
func TestLockRefresh(t *testing.T) {
@ -310,7 +312,7 @@ func TestLockRefreshStale(t *testing.T) {
}
func TestLockRefreshStaleMissing(t *testing.T) {
repo := repository.TestRepository(t)
repo, be := repository.TestRepositoryWithVersion(t, 0)
restic.TestSetLockTimeout(t, 5*time.Millisecond)
lock, err := restic.NewLock(context.TODO(), repo)
@ -318,7 +320,7 @@ func TestLockRefreshStaleMissing(t *testing.T) {
lockID := checkSingleLock(t, repo)
// refresh must fail if lock was removed
rtest.OK(t, repo.Backend().Remove(context.TODO(), backend.Handle{Type: restic.LockFile, Name: lockID.String()}))
rtest.OK(t, be.Remove(context.TODO(), backend.Handle{Type: restic.LockFile, Name: lockID.String()}))
time.Sleep(time.Millisecond)
err = lock.RefreshStaleLock(context.TODO())
rtest.Assert(t, err == restic.ErrRemovedLock, "unexpected error, expected %v, got %v", restic.ErrRemovedLock, err)

View File

@ -48,13 +48,15 @@ const (
TypeCreationTime GenericAttributeType = "windows.creation_time"
// TypeFileAttributes is the GenericAttributeType used for storing file attributes for windows files within the generic attributes map.
TypeFileAttributes GenericAttributeType = "windows.file_attributes"
// TypeSecurityDescriptor is the GenericAttributeType used for storing security descriptors including owner, group, discretionary access control list (DACL) and system access control list (SACL) for windows files within the generic attributes map.
TypeSecurityDescriptor GenericAttributeType = "windows.security_descriptor"
// Generic Attributes for other OS types should be defined here.
)
// init is called when the package is initialized. Any new GenericAttributeTypes being created must be added here as well.
func init() {
storeGenericAttributeType(TypeCreationTime, TypeFileAttributes)
storeGenericAttributeType(TypeCreationTime, TypeFileAttributes, TypeSecurityDescriptor)
}
// genericAttributesForOS maintains a map of known genericAttributesForOS to the OSType
@ -719,12 +721,7 @@ func (node *Node) fillExtra(path string, fi os.FileInfo, ignoreXattrListError bo
allowExtended, err := node.fillGenericAttributes(path, fi, stat)
if allowExtended {
// Skip processing ExtendedAttributes if allowExtended is false.
errEx := node.fillExtendedAttributes(path, ignoreXattrListError)
if err == nil {
err = errEx
} else {
debug.Log("Error filling extended attributes for %v at %v : %v", node.Name, path, errEx)
}
err = errors.CombineErrors(err, node.fillExtendedAttributes(path, ignoreXattrListError))
}
return err
}

Some files were not shown because too many files have changed in this diff.