Compare commits

...

101 Commits

Author SHA1 Message Date
Fufu Fang 07475660f1
updated LinkTable invalidation 2024-05-11 23:15:32 +01:00
Fufu Fang a5a53442b2
updated the description for refreshing directories 2024-05-11 17:28:23 +01:00
Fufu Fang 9e383ad7a3
Moved linktable freshness check around
Fixed https://github.com/fangfufu/httpdirfs/issues/141
2024-05-11 01:53:30 +01:00
Fufu Fang 127f2194d0
added more comments 2024-05-07 01:44:01 +01:00
Fufu Fang 4fb95ee5a0
attempt to fix codeql 2024-05-06 00:47:22 +01:00
Fufu Fang 720db5aafa
fixed cache system for percentage encoded file in single-file mode 2024-05-06 00:12:03 +01:00
Fufu Fang 28293b5ccd
fixed erroneous error check 2024-05-05 03:14:51 +01:00
Fufu Fang 1a20318654
added more debug statements 2024-05-05 02:55:10 +01:00
Fufu Fang 9a7eabd170
modified debug message 2024-05-05 02:04:31 +01:00
Fufu Fang 01fd2e9559
changed the way debug level works 2024-05-05 02:00:46 +01:00
Fufu Fang be666d72e9
removed semi-colon at the end of a macro 2024-05-05 00:32:00 +01:00
Fufu Fang 1fa3830dec
run through the formatter 2024-05-03 07:39:14 +01:00
Fufu Fang 8aa7c570c8
added a todo note 2024-05-03 07:37:44 +01:00
Fufu Fang 389a657170
improved debug message 2024-05-03 07:33:41 +01:00
Fufu Fang 257bb22e80
Merge branch 'master' into debug 2024-05-03 07:20:08 +01:00
Fufu Fang a299819b7d
fixed a memory leak, improved error handling in cache system 2024-05-03 07:19:24 +01:00
Fufu Fang 3e7d9f0294
start labelling what might be wrong. 2024-05-03 06:44:59 +01:00
Fufu Fang 63455c54cc
initial commit to the debug branch 2024-05-03 06:44:33 +01:00
Fufu Fang d4c7d8c92a
added more debug message 2024-05-03 06:44:01 +01:00
Fufu Fang dfc83d0e1c
improved debug message 2024-05-03 06:24:50 +01:00
Fufu Fang 96a7c248d3
improved debug message 2024-05-03 05:59:09 +01:00
Fufu Fang f92fe4232a
attempt to fix codeQL 2024-05-02 07:07:58 +01:00
Fufu Fang 91351689f1
LinkTable now saves the refresh time 2024-05-02 06:59:22 +01:00
Fufu Fang 1a3f36a92c
Corrected an implementation error and added more comments 2024-05-02 04:45:34 +01:00
Fufu Fang d6d4af0c8c
Update README.md
Fix https://github.com/fangfufu/httpdirfs/issues/136
2024-04-20 01:30:52 +01:00
Fufu Fang f48ee93931
Update README.md 2024-02-01 09:58:05 +00:00
Fufu Fang 983b1edfbd
Updated README 2024-02-01 06:28:36 +00:00
Fufu Fang 707d9b9253
Configure online code scanning tools
- Added .deepsource.toml for Deep Source
- Added configuration for GitHub CodeQL
2024-02-01 02:53:26 +00:00
Fufu Fang 81aac8bb57
fixed spelling, ran through the formatter 2024-01-13 12:31:47 +00:00
Mattias Runge-Broberg 35a213942c
Fix for single file mode not working
- Fix: no longer send ranges which exceed the content-length, as doing so
results in an error.
- Fix: the byte range was set 1 byte too large; it should be the end index,
not the size, as described in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
2024-01-13 12:30:52 +00:00
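A minimal sketch of the corrected range arithmetic (hypothetical helper names, not the actual httpdirfs code): the end of an HTTP byte range is an inclusive index, so it is `offset + size - 1`, and it should be clamped so it never points past the last byte of the resource.

```c
#include <stdio.h>

/* Hypothetical helper, not the actual httpdirfs code: build the value for a
 * "bytes=start-end" range, where "end" is an inclusive index. */
static void make_range(char *out, size_t outlen,
                       long offset, long size, long content_length)
{
    long end = offset + size - 1;          /* end index, not the size */
    if (end > content_length - 1)
        end = content_length - 1;          /* never ask past the last byte */
    snprintf(out, outlen, "%ld-%ld", offset, end);
}

int main(void)
{
    char range[64];
    /* read 4096 bytes at offset 8192 from a 10000-byte file */
    make_range(range, sizeof(range), 8192, 4096, 10000);
    printf("Range: bytes=%s\n", range);    /* prints "Range: bytes=8192-9999" */
    return 0;
}
```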
Fufu Fang 595c6d275e
Remove spurious code
Remove spurious code flagged by 8451da6ac7,
which was introduced by e76b079fe6
Closes https://github.com/fangfufu/httpdirfs/issues/124
2023-10-03 23:10:24 +01:00
chrysn bd33966337 Allow leading `./` segments in links 2023-10-02 23:44:18 +01:00
Jonathan Kamens 29c3eb8f67 Convert build process to use autotools (autoconf, automake, etc.)
This commit converts the build process from a hand-written Makefile
that works on Linux, FreeBSD, and macOS, to an automatically generated
Makefile managed by the autotools toolset.

This includes:

* Add the compile, config.guess, config.sub, depcomp, install-sh, and
  missing helper scripts that autotools requires to be shipped with
  the package in order for configure to work.
* Rename Makefile to Makefile.am and restructure it for compatibility
  with autotools and specifically with the stuff in our configure
  script.
* Create the configure.ac source file which is turned into the
  configure script.
* Rename Doxyfile to Doxyfile.in so that the source directories can be
  substituted into it at configure time.
* Tweak .gitignore to ignore temporary and output files related to
  autotools.
* Generate Makefile.in, aclocal.m4, and configure using `autoreconf`
  and include them as checked-in source files.

While I can't fully document how autotools works here, the basic
workflow is that when you need to make changes to the build, you
update Makefile.am and/or configure.ac as needed, run `autoreconf`,
and commit the changes you made as well as any resulting changes to
Makefile.in, aclocal.m4, and configure. Makefile should _not_ be
committed into the source tree; it should always be generated using
configure on the system where the build is being run.
2023-09-29 23:45:47 +01:00
Jonathan Kamens ed93a133df Fix minor logic bug and code smell in make_link_relative
Don't assume that the reason why we didn't find enough slashes in a
URL is because the user didn't specify the slash at the end of the
host name, unless we did find the first two slashes.

Add some curly braces around an if block to make it clear to people
and the compiler which statement an `else` applies to. The logic was
correct before but the indentation was wrong, making it especially
confusing.
2023-09-29 23:45:47 +01:00
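For illustration, a self-contained example of the dangling-else hazard the commit describes; the variable and function names here are made up and are not taken from make_link_relative().

```c
#include <stdio.h>

/* Without braces, `else` binds to the nearest `if`, no matter how the
 * source is indented. */
int main(void)
{
    int slashes_found = 1;
    int has_trailing_slash = 0;

    if (slashes_found >= 2)
        if (has_trailing_slash)
            puts("use URL tail");
    else                          /* indented as if it pairs with the outer if */
        puts("missing slashes");  /* but it actually pairs with the inner if   */

    /* With braces, the intent is explicit and matches the indentation: */
    if (slashes_found >= 2) {
        if (has_trailing_slash)
            puts("use URL tail");
    } else {
        puts("missing slashes");
    }
    return 0;
}
```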
Jonathan Kamens 7bcd43068d Fix broken curl HTTP response code check
The check for the HTTP response code from the curl library was written
incorrectly and guaranteed to always fail. I've fixed the logic to
reflect what I believe was intended.
2023-09-29 23:45:47 +01:00
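The commit does not show the corrected check itself, but a typical way to query and test the HTTP response code with libcurl looks roughly like this (a hedged sketch, not the httpdirfs source):

```c
#include <curl/curl.h>

/* Fetch the response code after a transfer and treat anything outside
 * the 2xx range as a failure. */
static int response_ok(CURL *curl)
{
    long http_resp = 0;
    if (curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_resp) != CURLE_OK)
        return 0;
    return http_resp >= 200 && http_resp < 300;
}
```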
Jonathan Kamens ab49ca76b6 Add missing return value check for fread call 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8451da6ac7 Comment out small block of code that doesn't do anything
There's a small block of code that calls strnlen on a string, saves
the result in a variable, conditionally decrements the variable, and
then does nothing with it, making the entire block of code a no-op.

I don't want to just remove it entirely since it's possible that there
was intended to be some sort of check here that was inadvertently
omitted. So to make the compiler stop complaining I've commented out
the code, but I've left a comment above it explaining why it was
commented out and pointing out that maybe something different needs to
be done with it.
2023-09-29 23:45:47 +01:00
Jonathan Kamens e253b4a9ee Eliminate some compiler warnings 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8f0ef158c0 Remove spurious arguments to print_version() 2023-09-29 23:45:47 +01:00
Jonathan Kamens c532661d29 Add missing error-checking for return value of fread
Several calls to fread were missing checks to ensure that the expected
amount of data was read.
2023-09-29 23:45:47 +01:00
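As a generic illustration of the kind of check these commits add (not the actual httpdirfs code): fread() returns the number of items read, and a short count must be handled explicitly.

```c
#include <stdio.h>
#include <stdlib.h>

/* Read exactly nmemb bytes or report why we could not. */
static void read_block(FILE *fp, void *buf, size_t nmemb)
{
    if (fread(buf, 1, nmemb, fp) != nmemb) {
        if (ferror(fp))
            perror("fread");
        else
            fprintf(stderr, "fread: unexpected end of file\n");
        exit(EXIT_FAILURE);
    }
}
```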
Jonathan Kamens 7363adaf12 Handle sites that put unencoded characters in URLs that curl dislikes
Some sites put unencoded characters in their href attributes that
really should be encoded, most notably spaces. Curl won't accept a URL
with a space in it, and perhaps other such characters as well. Address
this by properly encoding characters in URLs before feeding them to
Curl.
2023-09-29 12:47:55 +01:00
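One plausible way to do this with libcurl's own escaping helper is sketched below; note that curl_easy_escape() also encodes separators such as '/', so it has to be applied to individual path segments rather than to whole URLs. This is an illustrative example, not the httpdirfs implementation.

```c
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    /* "file name (1).iso" is a made-up href segment with unencoded spaces. */
    char *escaped = curl_easy_escape(curl, "file name (1).iso", 0);
    if (escaped) {
        /* prints http://example.com/pub/file%20name%20%281%29.iso */
        printf("http://example.com/pub/%s\n", escaped);
        curl_free(escaped);
    }
    curl_easy_cleanup(curl);
    return 0;
}
```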
Jonathan Kamens e94b5441f3 Add a few more debug messages to help trace program execution 2023-09-29 12:47:55 +01:00
Jonathan Kamens 3beccd2c2d Enabling debugging on command line should enable debug logging
I believe an appropriate expectation is that if the user enables
debugging with a command-line flag, then that should also enable
messages designated as debug messages in the code to be printed.
2023-09-29 12:47:55 +01:00
Jonathan Kamens 4d323b846f Do the right thing with sites that use absolute links
On some sites, the link to each subfolder is an absolute link rather
than a relative one. To accommodate this, convert the links from
absolute to relative before storing them in the link table.
2023-09-29 12:47:55 +01:00
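A simplified sketch of the idea (hypothetical helper, not the actual conversion code in httpdirfs): if an href is absolute, strip the prefix that corresponds to the directory currently being indexed.

```c
#include <stdio.h>
#include <string.h>

static const char *make_relative(const char *base_path, const char *href)
{
    if (href[0] != '/')
        return href;                         /* already relative */
    size_t len = strlen(base_path);
    if (strncmp(href, base_path, len) == 0)
        return href + len;                   /* drop the common prefix */
    return href;                             /* not under this directory */
}

int main(void)
{
    /* e.g. indexing http://example.com/pub/ and finding href="/pub/linux/" */
    puts(make_relative("/pub/", "/pub/linux/"));   /* prints "linux/" */
    return 0;
}
```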
Jonathan Kamens 41cb4b80bc Do the right thing with sites that require the final slash
Some web sites will return 404 if you fetch a directory without the
final slash. For example, https://archive.mozilla.org/pub/ works, while
https://archive.mozilla.org/pub does not. We need to do two things to
accommodate this:

* When processing the root URL of the filesystem, instead of stripping
  off the final slash, just set the offset to ignore it.
* In the link structure, store the actual URL tail of the link
  separately from its name, final slash and all if there is one, and
  append that instead of the name when constructing the URL for curl.
2023-09-29 12:47:55 +01:00
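A toy illustration of the second point (the field names are hypothetical, not the real Link struct): keep the display name and the raw URL tail separately, and append the tail, trailing slash included, when rebuilding the URL for curl.

```c
#include <stdio.h>

typedef struct {
    char name[256];   /* what the filesystem shows, e.g. "pub"  */
    char tail[256];   /* what the server expects, e.g. "pub/"   */
} Link;

int main(void)
{
    Link l = { "pub", "pub/" };
    char url[512];
    snprintf(url, sizeof(url), "https://archive.mozilla.org/%s", l.tail);
    printf("%s -> %s\n", l.name, url);   /* keeps the final slash */
    return 0;
}
```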
Fufu Fang 1e80844831 ran the code through formatter 2023-07-26 07:48:33 +08:00
Fufu Fang 6d8db94458 minor formatting changes for PR #114 2023-07-26 07:48:22 +08:00
Fufu Fang 282605b0ac fix: changed deprecated libcurl call 2023-07-25 14:57:08 +08:00
Mike Morrison a309994b9e
Add setting to refresh directory contents (#114)
Refresh a directory's contents when fs_readdir is called
if it has been more than the number of seconds specified by
--refresh_timeout since the directory was last indexed.
2023-03-31 13:26:15 +01:00
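A rough sketch of that timeout check (hypothetical field and parameter names, not the actual fs_readdir() code):

```c
#include <time.h>

struct dir_entry {
    time_t last_indexed;      /* when this directory was last scanned */
};

/* Re-index the directory on readdir once it is older than the timeout. */
static int needs_refresh(const struct dir_entry *d, long refresh_timeout)
{
    return time(NULL) - d->last_indexed > refresh_timeout;
}
```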
Kian-Meng Ang 9a7016f29b
Fix typos (#117)
Found via `codespell`
2023-03-28 05:00:07 +01:00
Fufu Fang 8479feb2f6
Bumped version number to 1.2.5 for Debian release 2023-02-24 19:47:23 +00:00
Fufu Fang fe45afc6a1
Remove the usage of UBSAN
Address issue #113. Use of UBSAN at runtime could introduce
vulnerabilities.

Original bug report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1031744

Reference:
https://www.openwall.com/lists/oss-security/2016/02/17/9
2023-02-23 01:44:18 +00:00
Jérôme Charaoui e9f60d5221
fix typo 2023-01-28 12:02:31 -05:00
Jérôme Charaoui 74fac1dce0
bump VERSION in Makefile 2023-01-28 12:01:06 -05:00
Fufu Fang 9b72f97bcf
Update README.md 2023-01-14 00:04:12 +00:00
Fufu Fang d91bb2b278
Update CHANGELOG.md 2023-01-11 23:56:19 +00:00
Fufu Fang f26a5bce25
Update CHANGELOG.md 2023-01-11 23:55:20 +00:00
Fufu Fang e6b5688e45
Modified Funkwhale sanitiser scheme 2022-11-06 23:45:13 +00:00
Fufu Fang 3acc093cdd
Merge pull request #106 from rdelaage/funkwhale_ioerror
Fix IO error with funkwhale subsonic API
2022-11-06 23:39:00 +00:00
Fufu Fang bb3b652135
Merge pull request #109 from nwf-msr/master
Add --cacert and --proxy-cacert
2022-11-02 08:19:26 +00:00
Nathaniel Wesley Filardo 12abb7d8ad Add --cacert and --proxy-cacert
Fixes https://github.com/fangfufu/httpdirfs/issues/108
2022-11-01 02:13:27 +00:00
Nathaniel Wesley Filardo ff5f566dd9 Link_download_full: don't FREE(NULL)
It's entirely possible that `ts.data` is `NULL` on an error path, so
handing it to `FREE()`, which bails on a `NULL` argument, is not ideal.
Just pass it to `free()` instead, which is required to no-op if given
`NULL`.
2022-11-01 01:59:03 +00:00
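A minimal reconstruction of the behaviour being contrasted here (the real FREE() in httpdirfs differs in detail): free(NULL) is defined by the C standard to do nothing, whereas a checked wrapper like the one sketched below treats NULL as a fatal error.

```c
#include <stdio.h>
#include <stdlib.h>

/* A checked wrapper in the spirit of the FREE() being discussed. */
#define FREE(p)                                             \
    do {                                                    \
        if ((p) == NULL) {                                  \
            fprintf(stderr, "FREE(): NULL pointer\n");      \
            exit(EXIT_FAILURE);                             \
        }                                                   \
        free(p);                                            \
    } while (0)

int main(void)
{
    char *ts_data = NULL;   /* e.g. an error path never allocated it */
    free(ts_data);          /* fine: free(NULL) is required to be a no-op */
    /* FREE(ts_data);          would abort here, hence the fix above */
    return 0;
}
```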
Nathaniel Wesley Filardo 833cbf9d67 Correct error message in FREE().
`FREE()` checks for a `NULL` pointer, but generally httpdirfs does not
`NULL` out pointers it attempts to `FREE()` (or `free()`).  As such, the
error message is misleading; make it less so in a trivial way.

Possibly a better, more invasive, change would be for `FREE()` to take a
`void** pp`, check that `*pp != NULL`, `free(*pp)`, and then `*pp = NULL;`.
Were that done, then there would be some plausibility to the current
diagnostic message.
2022-11-01 01:59:03 +00:00
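A sketch of the more invasive change suggested above (hypothetical code, not something that exists in the repository): have the wrapper take the address of the pointer so it can both check it and NULL it out.

```c
#include <stdio.h>
#include <stdlib.h>

static void free_and_null(void **pp)
{
    if (*pp == NULL) {
        fprintf(stderr, "free_and_null(): pointer already NULL\n");
        return;
    }
    free(*pp);
    *pp = NULL;        /* a later double free now hits the NULL check */
}

int main(void)
{
    void *buf = malloc(16);
    free_and_null(&buf);
    free_and_null(&buf);   /* reported instead of corrupting the heap */
    return 0;
}
```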
Romain de Laage abef0c9406
Fix IO error with funkwhale subsonic API 2022-09-23 07:49:36 +02:00
Fufu Fang 61d3ae4166
Merge pull request #104 from nwf-msr/202206-small-fixes
Two small patches
2022-08-12 00:49:03 +01:00
Nathaniel Wesley Filardo 72d15ab6c7 fs_open: return EROFS for non-RO opens
The use of EACCES leads to slightly confusing error messages in
downstream consumers, so prefer EROFS to better articulate what's
actually happening.

While here, use O_RDWR to mask the open flags while testing for
non-RO access.  This is at least encouraged by POSIX with their
suggestion that "O_RDONLY | O_WRONLY == O_RDWR".
2022-06-28 15:00:48 +01:00
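For illustration, a minimal version of such a guard in a read-only FUSE filesystem is sketched below; it uses the conventional O_ACCMODE mask rather than the O_RDWR mask mentioned in the commit, and it is not the actual fs_open() code.

```c
#include <errno.h>
#include <fcntl.h>

/* Reject anything that is not a read-only open on a read-only filesystem;
 * EROFS ("read-only file system") tells the caller more than EACCES. */
static int check_open_flags(int flags)
{
    if ((flags & O_ACCMODE) != O_RDONLY)
        return -EROFS;
    return 0;
}
```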
Nathaniel Wesley Filardo ffb2658abb getopt_long returns an int, not a char
On platforms with an unsigned char, such as Arm, this results in
always taking error paths around initialization.

Fixes https://github.com/fangfufu/httpdirfs/issues/103
2022-06-28 14:45:31 +01:00
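A self-contained illustration of this bug class (not the httpdirfs option table): getopt_long() returns an int, and storing the result in a char breaks the comparison with -1 on ABIs where char is unsigned.

```c
#include <getopt.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    static struct option long_opts[] = {
        {"verbose", no_argument, NULL, 'v'},
        {NULL, 0, NULL, 0}
    };

    /* Must be int: with "char c", the -1 sentinel becomes 255 on platforms
     * where char is unsigned, so the loop never terminates as intended. */
    int c;
    while ((c = getopt_long(argc, argv, "v", long_opts, NULL)) != -1) {
        if (c == 'v')
            puts("verbose");
    }
    return 0;
}
```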
Jérôme Charaoui d1a10d489c add --name option to help2man
This resolves a lintian warning in Debian packaging
(manpage-has-useless-whatis-entry).
2022-04-24 00:27:12 -04:00
Fufu Fang 3b25cf31ef
Merge pull request #101 from moschlar/patch-1
Fix --insecure-tls in help and README
2022-04-23 02:49:50 +01:00
Fufu Fang d2207e7a4e
fixed --version switch 2022-04-23 02:49:16 +01:00
Jérôme Charaoui 66776261ca Remove generated manpage from repo
Packages generate it on the fly.
2022-04-22 12:32:47 -04:00
Moritz Schlarb a6f453c6a8
Update README.md 2022-04-04 15:38:38 +02:00
Moritz Schlarb 4d45525c64
Update main.c 2022-04-04 15:37:36 +02:00
Fufu Fang 40c750fac9 moved the location of error string 2021-09-04 13:37:45 +01:00
Fufu Fang 67edcc906f Clean up for the master branch 2021-09-04 12:41:33 +01:00
Fufu Fang cbe8c83195 stable version for master 2021-09-04 03:15:26 +01:00
Fufu Fang ebcfb0a79e periodic backup 2021-09-04 03:00:25 +01:00
Fufu Fang 5d539c30b1 started writing the ramcache 2021-09-04 01:28:01 +01:00
Fufu Fang 939e287c87 adjusted includes 2021-09-03 21:39:31 +01:00
Fufu Fang 6819ad09e4 removed unnecessary includes 2021-09-03 21:23:52 +01:00
Fufu Fang 7c6433f0cd more refactoring 2021-09-03 17:00:32 +01:00
Fufu Fang 1efe5932cf more refactoring 2021-09-03 16:58:08 +01:00
Fufu Fang ee32ddebc9 simplified network code 2021-09-03 16:36:50 +01:00
Fufu Fang dd8d887f94 more refactoring 2021-09-03 16:29:00 +01:00
Fufu Fang d403fa339b minor refactoring 2021-09-03 15:41:22 +01:00
Fufu Fang cd6bb5bee8 more refactoring 2021-09-03 14:56:11 +01:00
Fufu Fang bc88a681e3 check return for curl_easy_setopt, also new libcurl debug level 2021-09-03 12:57:52 +01:00
Fufu Fang 08eb04fb0e refactoring - now check return code from curl_easy_getinfo 2021-09-03 12:47:48 +01:00
Fufu Fang c64a139b46 refactoring transfer_blocking 2021-09-03 12:40:35 +01:00
Fufu Fang 177b738522 removed ts_ptr from Link 2021-09-02 16:52:39 +01:00
Fufu Fang d7086c6ecf Now clear the link->cache_ptr after closing the cache 2021-09-02 16:24:55 +01:00
Fufu Fang b96ed88bec improved debug statements 2021-09-02 16:07:39 +01:00
Fufu Fang 2d42313e8f compiles, but not running properly 2021-09-02 15:36:53 +01:00
Fufu Fang 31f8509f42 moved the *sonic related fields into a separate struct 2021-09-01 21:29:13 +01:00
Fufu Fang e7f06285df improved Makefile, fixed potential memory leak at Data_create 2021-09-01 12:34:53 +01:00
Fufu Fang 86003d2b6a Meta_create() now calls fclose itself 2021-09-01 12:19:20 +01:00
Fufu Fang 464c8e4863 Merged transfer status struct and transfer data struct 2021-09-01 11:56:18 +01:00
Fufu Fang a76366c481 improved error handling in path_download 2021-09-01 11:03:27 +01:00
Fufu Fang 8f9935ee5d moved cache_opened to cache.h 2021-09-01 10:39:33 +01:00
Fufu Fang 95b86825ed Added minimum transfer size in TransferDataStruct 2021-09-01 03:53:19 +01:00
Fufu Fang 08c1eeba49 added initial debug statements 2021-08-31 21:30:24 +01:00
37 changed files with 15125 additions and 967 deletions

.deepsource.toml (new file, 4 lines added)

@ -0,0 +1,4 @@
version = 1
[[analyzers]]
name = "cxx"

.github/workflows/codeql.yml (new vendored file, 91 lines added)

@ -0,0 +1,91 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
schedule:
- cron: '18 19 * * 1'
jobs:
analyze:
name: Analyze
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners
# Consider using larger runners for possible analysis time improvements.
runs-on: 'ubuntu-latest'
timeout-minutes: 360
permissions:
# required for all workflows
security-events: write
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
language: [ 'c-cpp' ]
# CodeQL supports [ 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift' ]
# Use only 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use only 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libgumbo-dev libfuse-dev libssl-dev \
libcurl4-openssl-dev uuid-dev help2man libexpat1-dev pkg-config \
autoconf
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v3
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

.gitignore (vendored, 16 changed lines)

@ -1,6 +1,5 @@
# Binaries
httpdirfs
sonicfs
# Intermediates
*.o
@ -14,3 +13,18 @@ doc/html
.vscode
*.c~
*.h~
# autotools
autom4te.cache
#Others
mnt
# Generated files
Doxyfile
Makefile
config.log
config.status
doc
src/.deps
src/.dirstamp

CHANGELOG.md

@ -6,6 +6,36 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Fixed
- The refreshed LinkTable is now saved
(https://github.com/fangfufu/httpdirfs/issues/141).
- Only one LinkTable of the same directory is created when the cache mode is
enabled (https://github.com/fangfufu/httpdirfs/issues/140).
- Cache mode now works correctly with escaped URLs
(https://github.com/fangfufu/httpdirfs/issues/138).
### Changed
- Improved LinkTable caching. LinkTable invalidation is now purely based on
timeout.
## [1.2.5] - 2023-02-24
### Fixed
- No longer compile with UBSAN enabled by default to avoid introducing
security vulnerabilities.
## [1.2.4] - 2023-01-11
### Added
- Add ``--cacert`` and ``--proxy-cacert`` options
### Fixed
- ``Link_download_full``: don't ``FREE(NULL)``
- Correct error message in ``FREE()``
- Error handling for ``fs_open`` and ``getopt_long``
- Fix IO error with funkwhale subsonic API
- Fix ``--insecure-tls`` in help and README
## [1.2.3] - 2021-08-31
### Added
@ -200,7 +230,9 @@ ${XDG_CONFIG_HOME}/httpdirfs, rather than ${HOME}/.httpdirfs
## [1.0] - 2018-08-22
- Initial release, everything works correctly, as far as I know.
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.2.3...master
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.2.5...master
[1.2.5]: https://github.com/fangfufu/httpdirfs/compare/1.2.4...1.2.5
[1.2.4]: https://github.com/fangfufu/httpdirfs/compare/1.2.3...1.2.4
[1.2.3]: https://github.com/fangfufu/httpdirfs/compare/1.2.2...1.2.3
[1.2.2]: https://github.com/fangfufu/httpdirfs/compare/1.2.1...1.2.2
[1.2.1]: https://github.com/fangfufu/httpdirfs/compare/1.2.0...1.2.1

Doxyfile.in

@ -790,8 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = . \
src
INPUT = @srcdir@ @srcdir@/src
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@ -901,7 +900,7 @@ EXCLUDE_PATTERNS =
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*
EXCLUDE_SYMBOLS = CALLOC lprintf
EXCLUDE_SYMBOLS = CALLOC lprintf FREE
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include

Makefile (deleted)

@ -1,95 +0,0 @@
VERSION = 1.2.3
CFLAGS += -O2 -Wall -Wextra -Wshadow -rdynamic -D_GNU_SOURCE\
-D_FILE_OFFSET_BITS=64 -DVERSION=\"$(VERSION)\"\
`pkg-config --cflags-only-I gumbo libcurl fuse uuid expat`
LDFLAGS += `pkg-config --libs-only-L gumbo libcurl fuse uuid expat`
LIBS = -pthread -lgumbo -lcurl -lfuse -lcrypto -lexpat
COBJS = main.o network.o fuse_local.o link.o cache.o util.o sonic.o log.o\
config.o
OS := $(shell uname)
ifeq ($(OS),Darwin)
BREW_PREFIX := $(shell brew --prefix)
CFLAGS += -I$(BREW_PREFIX)/opt/openssl/include \
-I$(BREW_PREFIX)/opt/curl/include
LDFLAGS += -L$(BREW_PREFIX)/opt/openssl/lib \
-L$(BREW_PREFIX)/opt/curl/lib
else
LIBS += -luuid
endif
ifeq ($(OS),FreeBSD)
LIBS += -lexecinfo
endif
prefix ?= /usr/local
all: httpdirfs
%.o: src/%.c
$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
httpdirfs: $(COBJS)
$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
install:
ifeq ($(OS),Linux)
install -m 755 -D httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
install -m 644 -D doc/man/httpdirfs.1 \
$(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
ifeq ($(OS),FreeBSD)
install -m 755 httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
gzip -f -k doc/man/httpdirfs.1
install -m 644 doc/man/httpdirfs.1.gz \
$(DESTDIR)$(prefix)/man/man1/httpdirfs.1.gz
endif
ifeq ($(OS),Darwin)
install -d $(DESTDIR)$(prefix)/bin
install -m 755 httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
install -d $(DESTDIR)$(prefix)/share/man/man1
install -m 644 doc/man/httpdirfs.1 \
$(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
man: httpdirfs
help2man --no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
indent -kr -nut src/*.c src/*.h
clean:
-rm -f src/*.h~
-rm -f src/*.c~
-rm -f *.o
-rm -f httpdirfs
distclean: clean
-rm -rf doc/html
-rm -rf doc/man/httpdirfs.1
uninstall:
-rm -f $(DESTDIR)$(prefix)/bin/httpdirfs
ifeq ($(OS),Linux)
-rm -f $(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
ifeq ($(OS),FreeBSD)
-rm -f $(DESTDIR)$(prefix)/man/man1/httpdirfs.1.gz
endif
ifeq ($(OS),Darwin)
-rm -f $(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
depend: .depend
.depend: src/*.c
rm -f ./.depend
$(CC) $(CFLAGS) -MM $^ -MF ./.depend;
include .depend
.PHONY: all man doc install clean distclean uninstall depend format

Makefile.am (new file, 36 lines added)

@ -0,0 +1,36 @@
bin_PROGRAMS = httpdirfs
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c
# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\"\
$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
LIBS += $(pkgconf_LIBS) $(fuse_LIBS)
man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*
# %.o: $(srcdir)/src/%.c
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
man: doc/man/httpdirfs.1
doc/man/httpdirfs.1: httpdirfs
mkdir -p doc/man
rm -f doc/man/httpdirfs.1.tmp
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
.PHONY: man doc format

Makefile.in (new file, 934 lines added)

@ -0,0 +1,934 @@
# Makefile.in generated by automake 1.16.5 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2021 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
bin_PROGRAMS = httpdirfs$(EXEEXT)
subdir = .
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \
$(am__configure_deps) $(am__DIST_COMMON)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno config.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES = Doxyfile
CONFIG_CLEAN_VPATH_FILES =
am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"
PROGRAMS = $(bin_PROGRAMS)
am__dirstamp = $(am__leading_dot)dirstamp
am_httpdirfs_OBJECTS = src/main.$(OBJEXT) src/network.$(OBJEXT) \
src/fuse_local.$(OBJEXT) src/link.$(OBJEXT) \
src/cache.$(OBJEXT) src/util.$(OBJEXT) src/sonic.$(OBJEXT) \
src/log.$(OBJEXT) src/config.$(OBJEXT) src/memcache.$(OBJEXT)
httpdirfs_OBJECTS = $(am_httpdirfs_OBJECTS)
httpdirfs_LDADD = $(LDADD)
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__maybe_remake_depfiles = depfiles
am__depfiles_remade = src/$(DEPDIR)/cache.Po src/$(DEPDIR)/config.Po \
src/$(DEPDIR)/fuse_local.Po src/$(DEPDIR)/link.Po \
src/$(DEPDIR)/log.Po src/$(DEPDIR)/main.Po \
src/$(DEPDIR)/memcache.Po src/$(DEPDIR)/network.Po \
src/$(DEPDIR)/sonic.Po src/$(DEPDIR)/util.Po
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(httpdirfs_SOURCES)
DIST_SOURCES = $(httpdirfs_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
man1dir = $(mandir)/man1
NROFF = nroff
MANS = $(man_MANS)
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
AM_RECURSIVE_TARGETS = cscope
am__DIST_COMMON = $(srcdir)/Doxyfile.in $(srcdir)/Makefile.in \
README.md compile config.guess config.sub depcomp install-sh \
missing
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
if test -d "$(distdir)"; then \
find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -rf "$(distdir)" \
|| { sleep 5 && rm -rf "$(distdir)"; }; \
else :; fi
am__post_remove_distdir = $(am__remove_distdir)
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
DIST_TARGETS = dist-gzip
# Exists only to be overridden by the user if desired.
AM_DISTCHECK_DVI_TARGET = dvi
distuninstallcheck_listfiles = find . -type f -print
am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
| sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\"\
$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
CPPFLAGS = @CPPFLAGS@
CSCOPE = @CSCOPE@
CTAGS = @CTAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
ETAGS = @ETAGS@
EXEEXT = @EXEEXT@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@ $(pkgconf_LIBS) $(fuse_LIBS)
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
NUCLA = @NUCLA@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
VERSION = @VERSION@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
fuse_CFLAGS = @fuse_CFLAGS@
fuse_LIBS = @fuse_LIBS@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkgconf_CFLAGS = @pkgconf_CFLAGS@
pkgconf_LIBS = @pkgconf_LIBS@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c
man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*
all: all-am
.SUFFIXES:
.SUFFIXES: .c .o .obj
am--refresh: Makefile
@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
$(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --foreign Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(top_srcdir)/configure: $(am__configure_deps)
$(am__cd) $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
$(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
$(am__aclocal_m4_deps):
Doxyfile: $(top_builddir)/config.status $(srcdir)/Doxyfile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
install-binPROGRAMS: $(bin_PROGRAMS)
@$(NORMAL_INSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(bindir)'"; \
$(MKDIR_P) "$(DESTDIR)$(bindir)" || exit 1; \
fi; \
for p in $$list; do echo "$$p $$p"; done | \
sed 's/$(EXEEXT)$$//' | \
while read p p1; do if test -f $$p \
; then echo "$$p"; echo "$$p"; else :; fi; \
done | \
sed -e 'p;s,.*/,,;n;h' \
-e 's|.*|.|' \
-e 'p;x;s,.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/' | \
sed 'N;N;N;s,\n, ,g' | \
$(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1 } \
{ d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \
if ($$2 == $$4) files[d] = files[d] " " $$1; \
else { print "f", $$3 "/" $$4, $$1; } } \
END { for (d in files) print "f", d, files[d] }' | \
while read type dir files; do \
if test "$$dir" = .; then dir=; else dir=/$$dir; fi; \
test -z "$$files" || { \
echo " $(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files '$(DESTDIR)$(bindir)$$dir'"; \
$(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files "$(DESTDIR)$(bindir)$$dir" || exit $$?; \
} \
; done
uninstall-binPROGRAMS:
@$(NORMAL_UNINSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
files=`for p in $$list; do echo "$$p"; done | \
sed -e 'h;s,^.*/,,;s/$(EXEEXT)$$//;$(transform)' \
-e 's/$$/$(EXEEXT)/' \
`; \
test -n "$$list" || exit 0; \
echo " ( cd '$(DESTDIR)$(bindir)' && rm -f" $$files ")"; \
cd "$(DESTDIR)$(bindir)" && rm -f $$files
clean-binPROGRAMS:
-test -z "$(bin_PROGRAMS)" || rm -f $(bin_PROGRAMS)
src/$(am__dirstamp):
@$(MKDIR_P) src
@: > src/$(am__dirstamp)
src/$(DEPDIR)/$(am__dirstamp):
@$(MKDIR_P) src/$(DEPDIR)
@: > src/$(DEPDIR)/$(am__dirstamp)
src/main.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/network.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/fuse_local.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/link.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/cache.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/util.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/sonic.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/log.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/config.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/memcache.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
httpdirfs$(EXEEXT): $(httpdirfs_OBJECTS) $(httpdirfs_DEPENDENCIES) $(EXTRA_httpdirfs_DEPENDENCIES)
@rm -f httpdirfs$(EXEEXT)
$(AM_V_CCLD)$(LINK) $(httpdirfs_OBJECTS) $(httpdirfs_LDADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
-rm -f src/*.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/cache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/config.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/fuse_local.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/link.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/log.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/main.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/memcache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/network.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/sonic.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/util.Po@am__quote@ # am--include-marker
$(am__depfiles_remade):
@$(MKDIR_P) $(@D)
@echo '# dummy' >$@-t && $(am__mv) $@-t $@
am--depfiles: $(am__depfiles_remade)
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
install-man1: $(man_MANS)
@$(NORMAL_INSTALL)
@list1=''; \
list2='$(man_MANS)'; \
test -n "$(man1dir)" \
&& test -n "`echo $$list1$$list2`" \
|| exit 0; \
echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \
$(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \
{ for i in $$list1; do echo "$$i"; done; \
if test -n "$$list2"; then \
for i in $$list2; do echo "$$i"; done \
| sed -n '/\.1[a-z]*$$/p'; \
fi; \
} | while read p; do \
if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
echo "$$d$$p"; echo "$$p"; \
done | \
sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
sed 'N;N;s,\n, ,g' | { \
list=; while read file base inst; do \
if test "$$base" = "$$inst"; then list="$$list $$file"; else \
echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \
$(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \
fi; \
done; \
for i in $$list; do echo "$$i"; done | $(am__base_list) | \
while read files; do \
test -z "$$files" || { \
echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \
$(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \
done; }
uninstall-man1:
@$(NORMAL_UNINSTALL)
@list=''; test -n "$(man1dir)" || exit 0; \
files=`{ for i in $$list; do echo "$$i"; done; \
l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
sed -n '/\.1[a-z]*$$/p'; \
} | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir)
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscope: cscope.files
test ! -s cscope.files \
|| $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS)
clean-cscope:
-rm -f cscope.files
cscope.files: clean-cscope cscopelist
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
-rm -f cscope.out cscope.in.out cscope.po.out cscope.files
distdir: $(BUILT_SOURCES)
$(MAKE) $(AM_MAKEFLAGS) distdir-am
distdir-am: $(DISTFILES)
$(am__remove_distdir)
test -d "$(distdir)" || mkdir "$(distdir)"
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
-test -n "$(am__skip_mode_fix)" \
|| find "$(distdir)" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r "$(distdir)"
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz
$(am__post_remove_distdir)
dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
$(am__post_remove_distdir)
dist-lzip: distdir
tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
$(am__post_remove_distdir)
dist-xz: distdir
tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
$(am__post_remove_distdir)
dist-zstd: distdir
tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst
$(am__post_remove_distdir)
dist-tarZ: distdir
@echo WARNING: "Support for distribution archives compressed with" \
"legacy program 'compress' is deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__post_remove_distdir)
dist-shar: distdir
@echo WARNING: "Support for shar distribution archives is" \
"deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz
$(am__post_remove_distdir)
dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__post_remove_distdir)
dist dist-all:
$(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:'
$(am__post_remove_distdir)
# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.lz*) \
lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
*.tar.xz*) \
xz -dc $(distdir).tar.xz | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
*.tar.zst*) \
zstd -dc $(distdir).tar.zst | $(am__untar) ;;\
esac
chmod -R a-w $(distdir)
chmod u+w $(distdir)
mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst
chmod a-w $(distdir)
test -d $(distdir)/_build || exit 0; \
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& am__cwd=`pwd` \
&& $(am__cd) $(distdir)/_build/sub \
&& ../../configure \
$(AM_DISTCHECK_CONFIGURE_FLAGS) \
$(DISTCHECK_CONFIGURE_FLAGS) \
--srcdir=../.. --prefix="$$dc_install_base" \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
&& cd "$$am__cwd" \
|| exit 1
$(am__post_remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
distuninstallcheck:
@test -n '$(distuninstallcheck_dir)' || { \
echo 'ERROR: trying to run $@ with an empty' \
'$$(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
$(am__cd) '$(distuninstallcheck_dir)' || { \
echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-am
all-am: Makefile $(PROGRAMS) $(MANS)
installdirs:
for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
-rm -f src/$(DEPDIR)/$(am__dirstamp)
-rm -f src/$(am__dirstamp)
-test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-binPROGRAMS clean-generic mostlyclean-am
distclean: distclean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-man
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am: install-binPROGRAMS
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man: install-man1
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-man
uninstall-man: uninstall-man1
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am am--depfiles am--refresh check \
check-am clean clean-binPROGRAMS clean-cscope clean-generic \
cscope cscopelist-am ctags ctags-am dist dist-all dist-bzip2 \
dist-gzip dist-lzip dist-shar dist-tarZ dist-xz dist-zip \
dist-zstd distcheck distclean distclean-compile \
distclean-generic distclean-tags distcleancheck distdir \
distuninstallcheck dvi dvi-am html html-am info info-am \
install install-am install-binPROGRAMS install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-man1 install-pdf \
install-pdf-am install-ps install-ps-am install-strip \
installcheck installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic pdf pdf-am ps ps-am tags tags-am uninstall \
uninstall-am uninstall-binPROGRAMS uninstall-man \
uninstall-man1
.PRECIOUS: Makefile
# %.o: $(srcdir)/src/%.c
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
man: doc/man/httpdirfs.1
doc/man/httpdirfs.1: httpdirfs
mkdir -p doc/man
rm -f doc/man/httpdirfs.1.tmp
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
.PHONY: man doc format
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

README.md

@ -1,3 +1,8 @@
[![CodeQL](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml/badge.svg)](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml)
[![CodeFactor](https://www.codefactor.io/repository/github/fangfufu/httpdirfs/badge)](https://www.codefactor.io/repository/github/fangfufu/httpdirfs)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/30af0a5b4d6f4a4d83ddb68f5193ad23)](https://app.codacy.com/gh/fangfufu/httpdirfs/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=fangfufu_httpdirfs&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=fangfufu_httpdirfs)
# HTTPDirFS - HTTP Directory Filesystem with a permanent cache, and Airsonic / Subsonic server support!
Have you ever wanted to mount those HTTP directory listings as if it was a
@ -22,9 +27,9 @@ present a HTTP directory listing.
## Installation
Please note that if you install HTTPDirFS from a repository, it can be outdated.
### Debian 11 "Bullseye"
HTTPDirFS is available as a package in Debian 11 "Bullseye". If you are on
Debian Bullseye, you can simply run the following
### Debian 12 "Bookworm"
HTTPDirFS is available as a package in Debian 12 "Bookworm". If you are on
Debian Bookworm, you can simply run the following
command as ``root``:
apt install httpdirfs
@ -42,49 +47,37 @@ HTTPDirFS is available in the
## Compilation
### Ubuntu
Under Ubuntu 18.04.4 LTS, you need the following packages:
Under Ubuntu 22.04 LTS, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
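As an illustrative sketch (not part of the README itself), the packages above can be installed in one go on an apt-based system; the package names are copied verbatim from the list:

sudo apt install libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev \
uuid-dev help2man libexpat1-dev pkg-config autoconf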
### Debian 11 "Bullseye" and Debian 10 "Buster"
Under Debian 10 "Buster" and newer versions, you need the following packages:
### Debian 12 "Bookworm"
Under Debian 12 "Bookworm" and newer versions, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev
### Debian 9 "Stretch"
Under Debian 9 "Stretch", you need the following packages:
libgumbo-dev libfuse-dev libssl1.0-dev libcurl4-openssl-dev
If you get the following warnings during compilation,
/usr/bin/ld: warning: libcrypto.so.1.0.2, needed by /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libcurl.so, may conflict with libcrypto.so.1.1
then this program will crash if you connect to an HTTPS website. You need to check
if you have ``libssl1.0-dev`` installed rather than ``libssl-dev``.
This is because you likely have the binaries of OpenSSL 1.0.2 installed alongside
the header files for OpenSSL 1.1. The header files for OpenSSL 1.0.2 link in
additional mutex related callback functions, whereas the header files for
OpenSSL 1.1 do not.
You can check your SSL engine version using the ``--version`` flag.
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
### FreeBSD
The following dependencies are required from either pkg or ports:
Packages:
gmake fusefs-libs gumbo e2fsprogs-libuuid curl expat
gmake fusefs-libs gumbo e2fsprogs-libuuid curl expat pkgconf help2man
If you want to be able to build the documentation ("gmake doc"), you also need
doxygen (devel/doxygen).
Ports:
devel/gmake sysutils/fusefs-libs devel/gumbo misc/e2fsprogs-libuuid ftp/curl textproc/expat2
devel/gmake sysutils/fusefs-libs devel/gumbo misc/e2fsprogs-libuuid ftp/curl textproc/expat2 devel/pkgconf devel/doxygen misc/help2man
**Note:** If you want brotli compression support, you will need to install curl
from ports and enable the option.
You can then build + install with:
./configure
gmake
sudo gmake install
@ -92,12 +85,16 @@ Alternatively, you may use the FreeBSD [ports(7)](https://man.freebsd.org/ports/
infrastructure to build HTTPDirFS from source with the modifications you need.
### macOS
You need to install macFUSE, cURL, gumbo, and OpenSSL from Homebrew:
You need to install some packages from Homebrew:
brew install macfuse curl gumbo-parser openssl pkg-config
brew install macfuse curl gumbo-parser openssl pkg-config help2man
If you want to be able to build the documentation ("make doc") you also need
help2man, doxygen, and graphviz.
Build and install:
./configure
make
sudo make install
@ -139,9 +136,9 @@ HTTPDirFS options:
--retry-wait Set delay in seconds before retrying an HTTP request
after encountering an error. (default: 5)
--user-agent Set user agent string (default: "HTTPDirFS")
--no-range-check Disable the build-in check for the server's support
--no-range-check Disable the built-in check for the server's support
for HTTP range requests
--insecure_tls Disable licurl TLS certificate verification by
--insecure-tls Disable libcurl TLS certificate verification by
setting CURLOPT_SSL_VERIFYHOST to 0
--single-file-mode Single file mode - rather than mounting a whole
directory, present a single file inside a virtual
@ -268,7 +265,7 @@ Alternatively, you can specify your own configuration file by using the
### Log levels
You can control how much logging output HTTPDirFS produces by setting the
``HTTPDIRFS_LOG_LEVEL`` enviromental variable. For details of the different
``HTTPDIRFS_LOG_LEVEL`` environmental variable. For details of the different
types of log that are supported, please refer to
[log.h](https://github.com/fangfufu/httpdirfs/blob/master/src/log.h) and
[log.c](https://github.com/fangfufu/httpdirfs/blob/master/src/log.c).
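A hedged sketch of what this looks like in practice; the level shown is a placeholder (the accepted values are defined in src/log.h, not reproduced here), and the URL and mount point are hypothetical:

# Placeholder level; check src/log.h for the values your build accepts
HTTPDIRFS_LOG_LEVEL=7 httpdirfs -f http://example.com/files/ ~/mnt/http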
@ -310,6 +307,8 @@ for the technical and moral support. Your wisdom is much appreciated!
compatibility patches.
- I would like to thank [hiliev](https://github.com/hiliev) for providing macOS
compatibility patches.
- I would like to thank [Jonathan Kamens](https://github.com/jikamens) for providing
a whole bunch of code improvements and the improved build system.
- I would like to thank [-Archivist](https://www.reddit.com/user/-Archivist/)
for not providing FTP or WebDAV access to his server. This piece of software was
written in direct response to his appalling behaviour.

aclocal.m4 vendored Normal file

File diff suppressed because it is too large

compile Executable file

@ -0,0 +1,343 @@
#! /bin/sh
# Wrapper for compilers which do not understand '-c -o'.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# Written by Tom Tromey <tromey@cygnus.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
nl='
'
# We need space, tab and new line, in precisely that order. Quoting is
# there to prevent tools from complaining about whitespace usage.
IFS=" "" $nl"
file_conv=
# func_file_conv build_file lazy
# Convert a $build file to $host form and store it in $file
# Currently only supports Windows hosts. If the determined conversion
# type is listed in (the comma separated) LAZY, no conversion will
# take place.
func_file_conv ()
{
file=$1
case $file in
/ | /[!/]*) # absolute file, and not a UNC file
if test -z "$file_conv"; then
# lazily determine how to convert abs files
case `uname -s` in
MINGW*)
file_conv=mingw
;;
CYGWIN* | MSYS*)
file_conv=cygwin
;;
*)
file_conv=wine
;;
esac
fi
case $file_conv/,$2, in
*,$file_conv,*)
;;
mingw/*)
file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
;;
cygwin/* | msys/*)
file=`cygpath -m "$file" || echo "$file"`
;;
wine/*)
file=`winepath -w "$file" || echo "$file"`
;;
esac
;;
esac
}
# func_cl_dashL linkdir
# Make cl look for libraries in LINKDIR
func_cl_dashL ()
{
func_file_conv "$1"
if test -z "$lib_path"; then
lib_path=$file
else
lib_path="$lib_path;$file"
fi
linker_opts="$linker_opts -LIBPATH:$file"
}
# func_cl_dashl library
# Do a library search-path lookup for cl
func_cl_dashl ()
{
lib=$1
found=no
save_IFS=$IFS
IFS=';'
for dir in $lib_path $LIB
do
IFS=$save_IFS
if $shared && test -f "$dir/$lib.dll.lib"; then
found=yes
lib=$dir/$lib.dll.lib
break
fi
if test -f "$dir/$lib.lib"; then
found=yes
lib=$dir/$lib.lib
break
fi
if test -f "$dir/lib$lib.a"; then
found=yes
lib=$dir/lib$lib.a
break
fi
done
IFS=$save_IFS
if test "$found" != yes; then
lib=$lib.lib
fi
}
# func_cl_wrapper cl arg...
# Adjust compile command to suit cl
func_cl_wrapper ()
{
# Assume a capable shell
lib_path=
shared=:
linker_opts=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
eat=1
case $2 in
*.o | *.[oO][bB][jJ])
func_file_conv "$2"
set x "$@" -Fo"$file"
shift
;;
*)
func_file_conv "$2"
set x "$@" -Fe"$file"
shift
;;
esac
;;
-I)
eat=1
func_file_conv "$2" mingw
set x "$@" -I"$file"
shift
;;
-I*)
func_file_conv "${1#-I}" mingw
set x "$@" -I"$file"
shift
;;
-l)
eat=1
func_cl_dashl "$2"
set x "$@" "$lib"
shift
;;
-l*)
func_cl_dashl "${1#-l}"
set x "$@" "$lib"
shift
;;
-L)
eat=1
func_cl_dashL "$2"
;;
-L*)
func_cl_dashL "${1#-L}"
;;
-static)
shared=false
;;
-Wl,*)
arg=${1#-Wl,}
save_ifs="$IFS"; IFS=','
for flag in $arg; do
IFS="$save_ifs"
linker_opts="$linker_opts $flag"
done
IFS="$save_ifs"
;;
-Xlinker)
eat=1
linker_opts="$linker_opts $2"
;;
-*)
set x "$@" "$1"
shift
;;
*.cc | *.CC | *.cxx | *.CXX | *.[cC]++)
func_file_conv "$1"
set x "$@" -Tp"$file"
shift
;;
*.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO])
func_file_conv "$1" mingw
set x "$@" "$file"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -n "$linker_opts"; then
linker_opts="-link$linker_opts"
fi
exec "$@" $linker_opts
exit 1
}
eat=
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand '-c -o'.
Remove '-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file 'INSTALL'.
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "compile $scriptversion"
exit $?
;;
cl | *[/\\]cl | cl.exe | *[/\\]cl.exe | \
icl | *[/\\]icl | icl.exe | *[/\\]icl.exe )
func_cl_wrapper "$@" # Doesn't return...
;;
esac
ofile=
cfile=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
# So we strip '-o arg' only if arg is an object.
eat=1
case $2 in
*.o | *.obj)
ofile=$2
;;
*)
set x "$@" -o "$2"
shift
;;
esac
;;
*.c)
cfile=$1
set x "$@" "$1"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no '-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# '.c' file was seen then we are probably linking. That is also
# ok.
exec "$@"
fi
# Name of file we expect compiler to create.
cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'`
# Create the lock directory.
# Note: use '[/\\:.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d
while true; do
if mkdir "$lockdir" >/dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir '$lockdir'; exit 1" 1 2 15
# Run the compile.
"$@"
ret=$?
if test -f "$cofile"; then
test "$cofile" = "$ofile" || mv "$cofile" "$ofile"
elif test -f "${cofile}bj"; then
test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile"
fi
rmdir "$lockdir"
exit $ret
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:
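When configure decides the selected compiler cannot handle '-c -o' together, automake routes compilations through this wrapper; it strips the '-o foo.o', runs the compiler, and renames the output into place under a mkdir-based lock. A sketch of a hand-run invocation with illustrative file names:

# The wrapper drops '-o sub/foo.o', runs 'cc -c sub/foo.c', then moves the
# resulting foo.o to sub/foo.o while holding the foo_o.d lock directory.
./compile cc -c -o sub/foo.o sub/foo.c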

config.guess vendored Executable file

File diff suppressed because it is too large

config.sub vendored Executable file

File diff suppressed because it is too large

configure vendored Executable file

File diff suppressed because it is too large

configure.ac Normal file

@ -0,0 +1,14 @@
AC_INIT([httpdirfs],[1.2.5])
AC_CANONICAL_BUILD
AC_CONFIG_FILES([Makefile Doxyfile])
AC_PROG_CC
AC_SEARCH_LIBS([backtrace],[execinfo])
# Because we use $(fuse_LIBS) in $(CFLAGS); see comment in Makefile.in
AX_CHECK_COMPILE_FLAG([-Wunused-command-line-argument],[NUCLA=-Wno-unused-command-line-argument],[-Werror])
AC_SUBST([NUCLA])
AM_INIT_AUTOMAKE([foreign subdir-objects])
PKG_CHECK_MODULES([pkgconf],[gumbo libcurl uuid expat openssl])
# This is separate because we need to be able to use $(fuse_LIBS) in CFLAGS
PKG_CHECK_MODULES([fuse],[fuse])
AC_OUTPUT
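configure.ac is short enough to read in full above. A sketch of building from a fresh checkout, assuming pkg-config and the listed libraries are present; the autoreconf step is only needed if the shipped configure script is to be regenerated (it additionally requires autoconf, automake and the autoconf-archive macro for AX_CHECK_COMPILE_FLAG):

autoreconf -fi          # optional: regenerate configure and Makefile.in
./configure             # runs the PKG_CHECK_MODULES probes for gumbo, libcurl, uuid, expat, openssl and fuse
make
sudo make install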

depcomp Executable file

@ -0,0 +1,786 @@
#! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAMS ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by 'PROGRAMS ARGS'.
object Object file output by 'PROGRAMS ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
# Get the directory component of the given path, and save it in the
# global variables '$dir'. Note that this directory component will
# be either empty or ending with a '/' character. This is deliberate.
set_dir_from ()
{
case $1 in
*/*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
*) dir=;;
esac
}
# Get the suffix-stripped basename of the given path, and save it the
# global variable '$base'.
set_base_from ()
{
base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
}
# If no dependency file was actually created by the compiler invocation,
# we still have to create a dummy depfile, to avoid errors with the
# Makefile "include basename.Plo" scheme.
make_dummy_depfile ()
{
echo "#dummy" > "$depfile"
}
# Factor out some common post-processing of the generated depfile.
# Requires the auxiliary global variable '$tmpdepfile' to be set.
aix_post_process_depfile ()
{
# If the compiler actually managed to produce a dependency file,
# post-process it.
if test -f "$tmpdepfile"; then
# Each line is of the form 'foo.o: dependency.h'.
# Do two passes, one to just change these to
# $object: dependency.h
# and one to simply output
# dependency.h:
# which is needed to avoid the deleted-header problem.
{ sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
} > "$depfile"
rm -f "$tmpdepfile"
else
make_dummy_depfile
fi
}
# A tabulation character.
tab=' '
# A newline character.
nl='
'
# Character ranges might be problematic outside the C locale.
# These definitions help.
upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lower=abcdefghijklmnopqrstuvwxyz
digits=0123456789
alpha=${upper}${lower}
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
rm -f "$tmpdepfile"
# Avoid interferences from the environment.
gccflag= dashmflag=
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
cygpath_u="cygpath -u -f -"
if test "$depmode" = msvcmsys; then
# This is just like msvisualcpp but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvisualcpp
fi
if test "$depmode" = msvc7msys; then
# This is just like msvc7 but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvc7
fi
if test "$depmode" = xlc; then
# IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
gccflag=-qmakedep=gcc,-MF
depmode=gcc
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## Note that this doesn't just cater to obsolete pre-3.x GCC compilers,
## but also to in-use compilers like IBM xlc/xlC and the HP C compiler.
## (see the conditional assignment to $gccflag above).
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say). Also, it might not be
## supported by the other compilers which use the 'gcc' depmode.
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The second -e expression handles DOS-style file names with drive
# letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the "deleted header file" problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
## Some versions of gcc put a space before the ':'. On the theory
## that the space means something, we add a space to the output as
## well. hp depmode also adds that space, but also prefixes the VPATH
## to the object. Take care to not repeat it in the output.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like '#:fec' to the end of the
# dependency line.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
| tr "$nl" ' ' >> "$depfile"
echo >> "$depfile"
# The second pass generates a dummy entry for each header file.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile"
;;
xlc)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts '$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.u
tmpdepfile2=$base.u
tmpdepfile3=$dir.libs/$base.u
"$@" -Wc,-M
else
tmpdepfile1=$dir$base.u
tmpdepfile2=$dir$base.u
tmpdepfile3=$dir$base.u
"$@" -M
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
aix_post_process_depfile
;;
tcc)
# tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26
# FIXME: That version is still under development at the moment of writing.
# Make sure that this statement remains true also for stable, released
# versions.
# It will wrap lines (doesn't matter whether long or short) with a
# trailing '\', as in:
#
# foo.o : \
# foo.c \
# foo.h \
#
# It will put a trailing '\' even on the last line, and will use leading
# spaces rather than leading tabs (at least since its commit 0394caf7
# "Emit spaces for -MD").
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
# We have to change lines of the first kind to '$object: \'.
sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
# And for each line of the second kind, we have to emit a 'dep.h:'
# dummy dependency, to avoid the deleted-header problem.
sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
rm -f "$tmpdepfile"
;;
## The order of this option in the case statement is important, since the
## shell code in configure will try each of these formats in the order
## listed in this file. A plain '-MD' option would be understood by many
## compilers, so we must ensure this comes after the gcc and icc options.
pgcc)
# Portland's C compiler understands '-MD'.
# Will always output deps to 'file.d' where file is the root name of the
# source file under compilation, even if file resides in a subdirectory.
# The object file name does not affect the name of the '.d' file.
# pgcc 10.2 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using '\' :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
set_dir_from "$object"
# Use the source, not the object, to determine the base name, since
# that's sadly what pgcc will do too.
set_base_from "$source"
tmpdepfile=$base.d
# For projects that build the same source file twice into different object
# files, the pgcc approach of using the *source* file root name can cause
# problems in parallel builds. Use a locking strategy to avoid stomping on
# the same $tmpdepfile.
lockdir=$base.d-lock
trap "
echo '$0: caught signal, cleaning up...' >&2
rmdir '$lockdir'
exit 1
" 1 2 13 15
numtries=100
i=$numtries
while test $i -gt 0; do
# mkdir is a portable test-and-set.
if mkdir "$lockdir" 2>/dev/null; then
# This process acquired the lock.
"$@" -MD
stat=$?
# Release the lock.
rmdir "$lockdir"
break
else
# If the lock is being held by a different process, wait
# until the winning process is done or we timeout.
while test -d "$lockdir" && test $i -gt 0; do
sleep 1
i=`expr $i - 1`
done
fi
i=`expr $i - 1`
done
trap - 1 2 13 15
if test $i -le 0; then
echo "$0: failed to acquire lock after $numtries attempts" >&2
echo "$0: check lockdir '$lockdir'" >&2
exit 1
fi
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
# Add 'dependent.h:' lines.
sed -ne '2,${
s/^ *//
s/ \\*$//
s/$/:/
p
}' "$tmpdepfile" >> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in 'foo.d' instead, so we check for that too.
# Subdirectories are respected.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
# Libtool generates 2 separate objects for the 2 libraries. These
# two compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir$base.o.d # libtool 1.5
tmpdepfile2=$dir.libs/$base.o.d # Likewise.
tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
# Same post-processing that is required for AIX mode.
aix_post_process_depfile
;;
msvc7)
if test "$libtool" = yes; then
showIncludes=-Wc,-showIncludes
else
showIncludes=-showIncludes
fi
"$@" $showIncludes > "$tmpdepfile"
stat=$?
grep -v '^Note: including file: ' "$tmpdepfile"
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The first sed program below extracts the file names and escapes
# backslashes for cygpath. The second sed program outputs the file
# name when reading, but also accumulates all include files in the
# hold buffer in order to output them again at the end. This only
# works with sed implementations that can handle large buffers.
sed < "$tmpdepfile" -n '
/^Note: including file: *\(.*\)/ {
s//\1/
s/\\/\\\\/g
p
}' | $cygpath_u | sort -u | sed -n '
s/ /\\ /g
s/\(.*\)/'"$tab"'\1 \\/p
s/.\(.*\) \\/\1:/
H
$ {
s/.*/'"$tab"'/
G
p
}' >> "$depfile"
echo >> "$depfile" # make sure the fragment doesn't end with a backslash
rm -f "$tmpdepfile"
;;
msvc7msys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for ':'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
"$@" $dashmflag |
sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this sed invocation
# correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no eat=no
for arg
do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
if test $eat = yes; then
eat=no
continue
fi
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-arch)
eat=yes ;;
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix=`echo "$object" | sed 's/^.*\././'`
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
# makedepend may prepend the VPATH from the source file name to the object.
# No need to regex-escape $object, excess matching of '.' is harmless.
sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process the last invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed '1,2d' "$tmpdepfile" \
| tr ' ' "$nl" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E \
| sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
| sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
IFS=" "
for arg
do
case "$arg" in
-o)
shift
;;
$object)
shift
;;
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E 2>/dev/null |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
echo "$tab" >> "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvcmsys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:
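depcomp is not normally run by hand; the automake-generated rules drive it through the environment variables documented at the top of the script. A hand-run sketch for the common gcc3 mode, with illustrative file names:

# depmode, source, object, DEPDIR and depfile are the variables depcomp
# documents; the paths are placeholders.
depmode=gcc3 source=src/main.c object=src/main.o \
DEPDIR=.deps depfile=src/.deps/main.Po \
./depcomp gcc -I. -c -o src/main.o src/main.c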


@ -1,269 +0,0 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.48.1.
.TH PRINT_VERSION: "1" "August 2021" "print_version: HTTPDirFS version 1.2.3" "User Commands"
.SH NAME
print_version: \- manual page for print_version: HTTPDirFS version 1.2.3
.SH DESCRIPTION
print_version: HTTPDirFS version 1.2.3
print_version: libcurl SSL engine: OpenSSL/1.1.1k
usage: ./httpdirfs [options] URL mountpoint
.SS "general options:"
.TP
\fB\-\-config\fR
Specify a configuration file
.TP
\fB\-o\fR opt,[opt...]
Mount options
.TP
\fB\-h\fR \fB\-\-help\fR
Print help
.TP
\fB\-V\fR \fB\-\-version\fR
Print version
.SS "HTTPDirFS options:"
.TP
\fB\-u\fR \fB\-\-username\fR
HTTP authentication username
.TP
\fB\-p\fR \fB\-\-password\fR
HTTP authentication password
.TP
\fB\-P\fR \fB\-\-proxy\fR
Proxy for libcurl, for more details refer to
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html
.TP
\fB\-\-proxy\-username\fR
Username for the proxy
.TP
\fB\-\-proxy\-password\fR
Password for the proxy
.TP
\fB\-\-cache\fR
Enable cache (default: off)
.TP
\fB\-\-cache\-location\fR
Set a custom cache location
(default: "${XDG_CACHE_HOME}/httpdirfs")
.TP
\fB\-\-dl\-seg\-size\fR
Set cache download segment size, in MB (default: 8)
Note: this setting is ignored if previously
cached data is found for the requested file.
.TP
\fB\-\-max\-seg\-count\fR
Set maximum number of download segments a file
can have. (default: 128*1024)
With the default setting, the maximum memory usage
per file is 128KB. This allows caching files up
to 1TB in size using the default segment size.
.TP
\fB\-\-max\-conns\fR
Set maximum number of network connections that
libcurl is allowed to make. (default: 10)
.TP
\fB\-\-retry\-wait\fR
Set delay in seconds before retrying an HTTP request
after encountering an error. (default: 5)
.TP
\fB\-\-user\-agent\fR
Set user agent string (default: "HTTPDirFS")
.TP
\fB\-\-no\-range\-check\fR
Disable the build\-in check for the server's support
for HTTP range requests
.TP
\fB\-\-insecure_tls\fR
Disable licurl TLS certificate verification by
setting CURLOPT_SSL_VERIFYHOST to 0
.TP
\fB\-\-single\-file\-mode\fR
Single file mode \- rather than mounting a whole
directory, present a single file inside a virtual
directory.
.IP
For mounting a Airsonic / Subsonic server:
.TP
\fB\-\-sonic\-username\fR
The username for your Airsonic / Subsonic server
.TP
\fB\-\-sonic\-password\fR
The password for your Airsonic / Subsonic server
.TP
\fB\-\-sonic\-id3\fR
Enable ID3 mode \- this present the server content in
Artist/Album/Song layout
.TP
\fB\-\-sonic\-insecure\fR
Authenticate against your Airsonic / Subsonic server
using the insecure username / hex encoded password
scheme
.SS "FUSE options:"
.TP
\fB\-d\fR \fB\-o\fR debug
enable debug output (implies \fB\-f\fR)
.TP
\fB\-f\fR
foreground operation
.TP
\fB\-s\fR
disable multi\-threaded operation
.TP
\fB\-o\fR allow_other
allow access to other users
.TP
\fB\-o\fR allow_root
allow access to root
.TP
\fB\-o\fR auto_unmount
auto unmount on process termination
.TP
\fB\-o\fR nonempty
allow mounts over non\-empty file/dir
.HP
\fB\-o\fR default_permissions enable permission checking by kernel
.TP
\fB\-o\fR fsname=NAME
set filesystem name
.TP
\fB\-o\fR subtype=NAME
set filesystem type
.TP
\fB\-o\fR large_read
issue large read requests (2.4 only)
.TP
\fB\-o\fR max_read=N
set maximum size of read requests
.TP
\fB\-o\fR hard_remove
immediate removal (don't hide files)
.TP
\fB\-o\fR use_ino
let filesystem set inode numbers
.TP
\fB\-o\fR readdir_ino
try to fill in d_ino in readdir
.TP
\fB\-o\fR direct_io
use direct I/O
.TP
\fB\-o\fR kernel_cache
cache files in kernel
.TP
\fB\-o\fR [no]auto_cache
enable caching based on modification times (off)
.TP
\fB\-o\fR umask=M
set file permissions (octal)
.TP
\fB\-o\fR uid=N
set file owner
.TP
\fB\-o\fR gid=N
set file group
.TP
\fB\-o\fR entry_timeout=T
cache timeout for names (1.0s)
.TP
\fB\-o\fR negative_timeout=T
cache timeout for deleted names (0.0s)
.TP
\fB\-o\fR attr_timeout=T
cache timeout for attributes (1.0s)
.TP
\fB\-o\fR ac_attr_timeout=T
auto cache timeout for attributes (attr_timeout)
.TP
\fB\-o\fR noforget
never forget cached inodes
.TP
\fB\-o\fR remember=T
remember cached inodes for T seconds (0s)
.TP
\fB\-o\fR nopath
don't supply path if not necessary
.TP
\fB\-o\fR intr
allow requests to be interrupted
.TP
\fB\-o\fR intr_signal=NUM
signal to send on interrupt (10)
.TP
\fB\-o\fR modules=M1[:M2...]
names of modules to push onto filesystem stack
.TP
\fB\-o\fR max_write=N
set maximum size of write requests
.TP
\fB\-o\fR max_readahead=N
set maximum readahead
.TP
\fB\-o\fR max_background=N
set number of maximum background requests
.TP
\fB\-o\fR congestion_threshold=N
set kernel's congestion threshold
.TP
\fB\-o\fR async_read
perform reads asynchronously (default)
.TP
\fB\-o\fR sync_read
perform reads synchronously
.TP
\fB\-o\fR atomic_o_trunc
enable atomic open+truncate support
.TP
\fB\-o\fR big_writes
enable larger than 4kB writes
.TP
\fB\-o\fR no_remote_lock
disable remote file locking
.TP
\fB\-o\fR no_remote_flock
disable remote file locking (BSD)
.HP
\fB\-o\fR no_remote_posix_lock disable remove file locking (POSIX)
.TP
\fB\-o\fR [no_]splice_write
use splice to write to the fuse device
.TP
\fB\-o\fR [no_]splice_move
move data while splicing to the fuse device
.TP
\fB\-o\fR [no_]splice_read
use splice to read from the fuse device
.PP
Module options:
.PP
[iconv]
.TP
\fB\-o\fR from_code=CHARSET
original encoding of file names (default: UTF\-8)
.TP
\fB\-o\fR to_code=CHARSET
new encoding of the file names (default: ANSI_X3.4\-1968)
.PP
[subdir]
.TP
\fB\-o\fR subdir=DIR
prepend this directory to all paths (mandatory)
.TP
\fB\-o\fR [no]rellinks
transform absolute symlinks to relative
.PP
print_version: libcurl SSL engine: OpenSSL/1.1.1k
print_version: HTTPDirFS version 1.2.3
print_version: libcurl SSL engine: OpenSSL/1.1.1k
FUSE library version: 2.9.9
fusermount3 version: 3.10.3
using FUSE kernel interface version 7.19
.SH "SEE ALSO"
The full documentation for
.B print_version:
is maintained as a Texinfo manual. If the
.B info
and
.B print_version:
programs are properly installed at your site, the command
.IP
.B info print_version:
.PP
should give you access to the complete manual.

install-sh Executable file

@ -0,0 +1,533 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2020-11-14.01; # UTC
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# 'make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
tab=' '
nl='
'
IFS=" $tab$nl"
# Set DOITPROG to "echo" to test this script.
doit=${DOITPROG-}
doit_exec=${doit:-exec}
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
chgrpprog=${CHGRPPROG-chgrp}
chmodprog=${CHMODPROG-chmod}
chownprog=${CHOWNPROG-chown}
cmpprog=${CMPPROG-cmp}
cpprog=${CPPROG-cp}
mkdirprog=${MKDIRPROG-mkdir}
mvprog=${MVPROG-mv}
rmprog=${RMPROG-rm}
stripprog=${STRIPPROG-strip}
posix_mkdir=
# Desired mode of installed file.
mode=0755
# Create dirs (including intermediate dirs) using mode 755.
# This is like GNU 'install' as of coreutils 8.32 (2020).
mkdir_umask=22
backupsuffix=
chgrpcmd=
chmodcmd=$chmodprog
chowncmd=
mvcmd=$mvprog
rmcmd="$rmprog -f"
stripcmd=
src=
dst=
dir_arg=
dst_arg=
copy_on_change=false
is_target_a_directory=possibly
usage="\
Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
--help display this help and exit.
--version display version info and exit.
-c (ignored)
-C install only if different (preserve data modification time)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-p pass -p to $cpprog.
-s $stripprog installed files.
-S SUFFIX attempt to back up existing files, with suffix SUFFIX.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG
RMPROG STRIPPROG
By default, rm is invoked with -f; when overridden with RMPROG,
it's up to you to specify -f if you want it.
If -S is not specified, no backups are attempted.
Email bug reports to bug-automake@gnu.org.
Automake home page: https://www.gnu.org/software/automake/
"
while test $# -ne 0; do
case $1 in
-c) ;;
-C) copy_on_change=true;;
-d) dir_arg=true;;
-g) chgrpcmd="$chgrpprog $2"
shift;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
case $mode in
*' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
shift;;
-o) chowncmd="$chownprog $2"
shift;;
-p) cpprog="$cpprog -p";;
-s) stripcmd=$stripprog;;
-S) backupsuffix="$2"
shift;;
-t)
is_target_a_directory=always
dst_arg=$2
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
shift;;
-T) is_target_a_directory=never;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
shift
done
# We allow the use of options -d and -T together, by making -d
# take the precedence; this is for compatibility with GNU install.
if test -n "$dir_arg"; then
if test -n "$dst_arg"; then
echo "$0: target directory not allowed when installing a directory." >&2
exit 1
fi
fi
if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dst_arg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dst_arg"
shift # fnord
fi
shift # arg
dst_arg=$arg
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call 'install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
if test $# -gt 1 || test "$is_target_a_directory" = always; then
if test ! -d "$dst_arg"; then
echo "$0: $dst_arg: Is not a directory." >&2
exit 1
fi
fi
fi
if test -z "$dir_arg"; then
do_exit='(exit $ret); exit $ret'
trap "ret=129; $do_exit" 1
trap "ret=130; $do_exit" 2
trap "ret=141; $do_exit" 13
trap "ret=143; $do_exit" 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names problematic for 'test' and other utilities.
case $src in
-* | [=\(\)!]) src=./$src;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
# Don't chown directories that already exist.
if test $dstdir_status = 0; then
chowncmd=""
fi
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dst_arg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dst_arg
# If destination is a directory, append the input filename.
if test -d "$dst"; then
if test "$is_target_a_directory" = never; then
echo "$0: $dst_arg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dstbase=`basename "$src"`
case $dst in
*/) dst=$dst$dstbase;;
*) dst=$dst/$dstbase;;
esac
dstdir_status=0
else
dstdir=`dirname "$dst"`
test -d "$dstdir"
dstdir_status=$?
fi
fi
case $dstdir in
*/) dstdirslash=$dstdir;;
*) dstdirslash=$dstdir/;;
esac
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
# The $RANDOM variable is not portable (e.g., dash). Use it
# here however when possible just to lower collision chance.
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap '
ret=$?
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null
exit $ret
' 0
# Because "mkdir -p" follows existing symlinks and we likely work
# directly in world-writeable /tmp, make sure that the '$tmpdir'
# directory is successfully created first before we actually test
# 'mkdir -p'.
if (umask $mkdir_umask &&
$mkdirprog $mkdir_mode "$tmpdir" &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
test_tmpdir="$tmpdir/a"
ls_ld_tmpdir=`ls -ld "$test_tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$test_tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null
fi
trap '' 0;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix='/';;
[-=\(\)!]*) prefix='./';;
*) prefix='';;
esac
oIFS=$IFS
IFS=/
set -f
set fnord $dstdir
shift
set +f
IFS=$oIFS
prefixes=
for d
do
test X"$d" = X && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=${dstdirslash}_inst.$$_
rmtmp=${dstdirslash}_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask &&
{ test -z "$stripcmd" || {
# Create $dsttmp read-write so that cp doesn't create it read-only,
# which would cause strip to fail.
if test -z "$doit"; then
: >"$dsttmp" # No need to fork-exec 'touch'.
else
$doit touch "$dsttmp"
fi
}
} &&
$doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } &&
{ test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } &&
{ test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# If -C, don't bother to copy if it wouldn't change the file.
if $copy_on_change &&
old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` &&
new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` &&
set -f &&
set X $old && old=:$2:$4:$5:$6 &&
set X $new && new=:$2:$4:$5:$6 &&
set +f &&
test "$old" = "$new" &&
$cmpprog "$dst" "$dsttmp" >/dev/null 2>&1
then
rm -f "$dsttmp"
else
# If $backupsuffix is set, and the file being installed
# already exists, attempt a backup. Don't worry if it fails,
# e.g., if mv doesn't support -f.
if test -n "$backupsuffix" && test -f "$dst"; then
$doit $mvcmd -f "$dst" "$dst$backupsuffix" 2>/dev/null
fi
# Rename the file to the real destination.
$doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null ||
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
{
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
test ! -f "$dst" ||
$doit $rmcmd "$dst" 2>/dev/null ||
{ $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null &&
{ $doit $rmcmd "$rmtmp" 2>/dev/null; :; }
} ||
{ echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
fi || exit 1
trap '' 0
fi
done
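The usage text near the top of the script lists four calling forms; a sketch of the two that a 'make install' run most plausibly exercises, with illustrative destination paths:

# Create the target directory tree (mode 755 by default), then install the
# binary with an explicit mode.
./install-sh -d /usr/local/bin
./install-sh -c -m 755 httpdirfs /usr/local/bin/httpdirfs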

missing Executable file

@ -0,0 +1,207 @@
#! /bin/sh
# Common wrapper for a few potentially missing GNU programs.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1996-2021 Free Software Foundation, Inc.
# Originally written by Fran,cois Pinard <pinard@iro.umontreal.ca>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try '$0 --help' for more information"
exit 1
fi
case $1 in
--is-lightweight)
# Used by our autoconf macros to check whether the available missing
# script is modern enough.
exit 0
;;
--run)
# Back-compat with the calling convention used by older automake.
shift
;;
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this fails due
to PROGRAM being missing or too old.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
Supported PROGRAM values:
aclocal autoconf autoheader autom4te automake makeinfo
bison yacc flex lex help2man
Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and
'g' are ignored when checking the name.
Send bug reports to <bug-automake@gnu.org>."
exit $?
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
exit $?
;;
-*)
echo 1>&2 "$0: unknown '$1' option"
echo 1>&2 "Try '$0 --help' for more information"
exit 1
;;
esac
# Run the given program, remember its exit status.
"$@"; st=$?
# If it succeeded, we are done.
test $st -eq 0 && exit 0
# Also exit now if it failed (or wasn't found), and '--version' was
# passed; such an option is passed most likely to detect whether the
# program is present and works.
case $2 in --version|--help) exit $st;; esac
# Exit code 63 means version mismatch. This often happens when the user
# tries to use an ancient version of a tool on a file that requires a
# minimum version.
if test $st -eq 63; then
msg="probably too old"
elif test $st -eq 127; then
# Program was missing.
msg="missing on your system"
else
# Program was found and executed, but failed. Give up.
exit $st
fi
perl_URL=https://www.perl.org/
flex_URL=https://github.com/westes/flex
gnu_software_URL=https://www.gnu.org/software
program_details ()
{
case $1 in
aclocal|automake)
echo "The '$1' program is part of the GNU Automake package:"
echo "<$gnu_software_URL/automake>"
echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/autoconf>"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
autoconf|autom4te|autoheader)
echo "The '$1' program is part of the GNU Autoconf package:"
echo "<$gnu_software_URL/autoconf/>"
echo "It also requires GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
esac
}
give_advice ()
{
# Normalize program name to check for.
normalized_program=`echo "$1" | sed '
s/^gnu-//; t
s/^gnu//; t
s/^g//; t'`
printf '%s\n' "'$1' is $msg."
configure_deps="'configure.ac' or m4 files included by 'configure.ac'"
case $normalized_program in
autoconf*)
echo "You should only need it if you modified 'configure.ac',"
echo "or m4 files included by it."
program_details 'autoconf'
;;
autoheader*)
echo "You should only need it if you modified 'acconfig.h' or"
echo "$configure_deps."
program_details 'autoheader'
;;
automake*)
echo "You should only need it if you modified 'Makefile.am' or"
echo "$configure_deps."
program_details 'automake'
;;
aclocal*)
echo "You should only need it if you modified 'acinclude.m4' or"
echo "$configure_deps."
program_details 'aclocal'
;;
autom4te*)
echo "You might have modified some maintainer files that require"
echo "the 'autom4te' program to be rebuilt."
program_details 'autom4te'
;;
bison*|yacc*)
echo "You should only need it if you modified a '.y' file."
echo "You may want to install the GNU Bison package:"
echo "<$gnu_software_URL/bison/>"
;;
lex*|flex*)
echo "You should only need it if you modified a '.l' file."
echo "You may want to install the Fast Lexical Analyzer package:"
echo "<$flex_URL>"
;;
help2man*)
echo "You should only need it if you modified a dependency" \
"of a man page."
echo "You may want to install the GNU Help2man package:"
echo "<$gnu_software_URL/help2man/>"
;;
makeinfo*)
echo "You should only need it if you modified a '.texi' file, or"
echo "any other file indirectly affecting the aspect of the manual."
echo "You might want to install the Texinfo package:"
echo "<$gnu_software_URL/texinfo/>"
echo "The spurious makeinfo call might also be the consequence of"
echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might"
echo "want to install GNU make:"
echo "<$gnu_software_URL/make/>"
;;
*)
echo "You might have modified some files without having the proper"
echo "tools for further handling them. Check the 'README' file, it"
echo "often tells you about the needed prerequisites for installing"
echo "this package. You may also peek at any GNU archive site, in"
echo "case some other package contains this missing '$1' program."
;;
esac
}
give_advice "$1" | sed -e '1s/^/WARNING: /' \
-e '2,$s/^/ /' >&2
# Propagate the correct exit status (expected to be 127 for a program
# not found, 63 for a program that failed due to version mismatch).
exit $st

src/cache.c

@ -1,6 +1,8 @@
#include "cache.h"
#include "config.h"
#include "log.h"
#include "util.h"
#include <sys/stat.h>
@ -44,14 +46,14 @@ static char *CacheSystem_calc_dir(const char *url)
xdg_cache_home = path_append(home, xdg_cache_home_default);
}
if (mkdir
(xdg_cache_home, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
(xdg_cache_home, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
char *cache_dir_root = path_append(xdg_cache_home, "/httpdirfs/");
if (mkdir
(cache_dir_root, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
(cache_dir_root, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
@ -75,7 +77,7 @@ static char *CacheSystem_calc_dir(const char *url)
char *escaped_url = curl_easy_escape(c, url, 0);
char *full_path = path_append(cache_dir_root, escaped_url);
if (mkdir(full_path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
FREE(fn);
@ -87,6 +89,8 @@ static char *CacheSystem_calc_dir(const char *url)
void CacheSystem_init(const char *path, int url_supplied)
{
lprintf(cache_lock_debug,
"thread %x: initialise cf_lock;\n", pthread_self());
if (pthread_mutex_init(&cf_lock, NULL)) {
lprintf(fatal, "cf_lock initialisation failed!\n");
}
@ -103,12 +107,12 @@ void CacheSystem_init(const char *path, int url_supplied)
* Check if directories exist, if not, create them
*/
if (mkdir(META_DIR, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
if (mkdir(DATA_DIR, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
@ -119,8 +123,8 @@ void CacheSystem_init(const char *path, int url_supplied)
*/
sonic_path = path_append(META_DIR, "rest/");
if (mkdir
(sonic_path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
(sonic_path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
FREE(sonic_path);
@ -130,8 +134,8 @@ void CacheSystem_init(const char *path, int url_supplied)
*/
sonic_path = path_append(DATA_DIR, "rest/");
if (mkdir
(sonic_path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
(sonic_path, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)
&& (errno != EEXIST)) {
lprintf(fatal, "mkdir(): %s\n", strerror(errno));
}
FREE(sonic_path);
@ -144,7 +148,7 @@ void CacheSystem_init(const char *path, int url_supplied)
* \brief read a metadata file
* \return 0 on success, errno on error.
*/
static int Meta_read(Cache * cf)
static int Meta_read(Cache *cf)
{
FILE *fp = cf->mfp;
rewind(fp);
@ -159,16 +163,12 @@ static int Meta_read(Cache * cf)
return EIO;
}
fread(&cf->time, sizeof(long), 1, fp);
fread(&cf->content_length, sizeof(off_t), 1, fp);
fread(&cf->blksz, sizeof(int), 1, fp);
fread(&cf->segbc, sizeof(long), 1, fp);
/*
* Error checking for fread
*/
if (ferror(fp)) {
lprintf(error, "error reading core metadata!\n");
if ( 1 != fread(&cf->time, sizeof(long), 1, fp) ||
1 != fread(&cf->content_length, sizeof(off_t), 1, fp) ||
1 != fread(&cf->blksz, sizeof(int), 1, fp) ||
1 != fread(&cf->segbc, sizeof(long), 1, fp) ||
ferror(fp) ) {
lprintf(error, "error reading core metadata %s!\n", cf->path);
return EIO;
}
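The hunk above folds the per-field fread() calls into a single check of their return counts. A standalone sketch of that pattern (hypothetical struct and field names, not the project's code):

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

struct meta_stub {
    long time;
    off_t content_length;
    int blksz;
    long segbc;
};

/* Read the fixed-size header fields; any short read or stream error is EIO. */
static int meta_read_stub(FILE *fp, struct meta_stub *m)
{
    rewind(fp);
    if (fread(&m->time, sizeof(m->time), 1, fp) != 1 ||
        fread(&m->content_length, sizeof(m->content_length), 1, fp) != 1 ||
        fread(&m->blksz, sizeof(m->blksz), 1, fp) != 1 ||
        fread(&m->segbc, sizeof(m->segbc), 1, fp) != 1 ||
        ferror(fp)) {
        return EIO;
    }
    return 0;
}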
@ -232,7 +232,7 @@ file!\n");
* - -1 on error,
* - 0 on success
*/
static int Meta_write(Cache * cf)
static int Meta_write(Cache *cf)
{
FILE *fp = cf->mfp;
rewind(fp);
@ -275,7 +275,7 @@ static int Meta_write(Cache * cf)
* \details We use sparse creation here
* \return exit on failure
*/
static void Data_create(Cache * cf)
static void Data_create(Cache *cf)
{
int fd;
int mode;
@ -322,7 +322,7 @@ static long Data_size(const char *fn)
* - negative values on error,
* - otherwise, the number of bytes read.
*/
static long Data_read(Cache * cf, uint8_t * buf, off_t len, off_t offset)
static long Data_read(Cache *cf, uint8_t *buf, off_t len, off_t offset)
{
if (len == 0) {
lprintf(error, "requested to read 0 byte!\n");
@ -375,7 +375,7 @@ static long Data_read(Cache * cf, uint8_t * buf, off_t len, off_t offset)
}
}
end:
end:
lprintf(cache_lock_debug,
"thread %x: unlocking seek_lock;\n", pthread_self());
@ -393,7 +393,7 @@ static long Data_read(Cache * cf, uint8_t * buf, off_t len, off_t offset)
* - -1 when the data file does not exist
* - otherwise, the number of bytes written.
*/
static long Data_write(Cache * cf, const uint8_t * buf, off_t len,
static long Data_write(Cache *cf, const uint8_t *buf, off_t len,
off_t offset)
{
if (len == 0) {
@ -433,7 +433,7 @@ static long Data_write(Cache * cf, const uint8_t * buf, off_t len,
lprintf(error, "fwrite(): encountered error!\n");
}
end:
end:
lprintf(cache_lock_debug,
"thread %x: unlocking seek_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&cf->seek_lock);
@ -495,7 +495,7 @@ static Cache *Cache_alloc()
/**
* \brief free a cache data structure
*/
static void Cache_free(Cache * cf)
static void Cache_free(Cache *cf)
{
if (pthread_mutex_destroy(&cf->seek_lock)) {
lprintf(fatal, "could not destroy seek_lock!\n");
@ -541,7 +541,9 @@ static void Cache_free(Cache * cf)
static int Cache_exist(const char *fn)
{
char *metafn = path_append(META_DIR, fn);
lprintf(debug, "metafn: %s\n", metafn);
char *datafn = path_append(DATA_DIR, fn);
lprintf(debug, "datafn: %s\n", datafn);
/*
* access() returns 0 on success
*/
@ -549,15 +551,17 @@ static int Cache_exist(const char *fn)
int no_data = access(datafn, F_OK);
if (no_meta ^ no_data) {
lprintf(warning, "Cache file partially missing.\n");
if (no_meta) {
lprintf(warning, "Cache file partially missing.\n");
lprintf(debug, "Unlinking datafn: %s\n", datafn);
if (unlink(datafn)) {
lprintf(error, "unlink(): %s\n", strerror(errno));
lprintf(fatal, "unlink(): %s\n", strerror(errno));
}
}
if (no_data) {
lprintf(debug, "Unlinking metafn: %s\n", metafn);
if (unlink(metafn)) {
lprintf(error, "unlink(): %s\n", strerror(errno));
lprintf(fatal, "unlink(): %s\n", strerror(errno));
}
}
}
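A condensed, self-contained sketch of the clean-up rule this hunk tightens: when exactly one half of a metadata/data pair exists, the orphaned half is unlinked (file names and the helper below are illustrative, not the cache layout):

#include <stdio.h>
#include <unistd.h>

/* Returns non-zero when the pair cannot be used as-is. */
static int pair_check(const char *metafn, const char *datafn)
{
    int no_meta = access(metafn, F_OK);     /* 0 when the file exists */
    int no_data = access(datafn, F_OK);
    if (no_meta ^ no_data) {                /* exactly one half is missing */
        const char *orphan = no_meta ? datafn : metafn;
        fprintf(stderr, "partial cache pair, unlinking %s\n", orphan);
        if (unlink(orphan)) {
            perror("unlink");
        }
    }
    return no_meta || no_data;
}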
@ -575,7 +579,7 @@ void Cache_delete(const char *fn)
{
if (CONFIG.mode == SONIC) {
Link *link = path_to_Link(fn);
fn = link->sonic_id;
fn = link->sonic.id;
}
char *metafn = path_append(META_DIR, fn);
@ -601,18 +605,19 @@ void Cache_delete(const char *fn)
* - 0 on success
* - -1 on failure, with appropriate errno set.
*/
static int Data_open(Cache * cf)
static int Data_open(Cache *cf)
{
char *datafn = path_append(DATA_DIR, cf->path);
cf->dfp = fopen(datafn, "r+");
FREE(datafn);
if (!cf->dfp) {
/*
* Failed to open the data file
*/
lprintf(error, "fopen(%s): %s\n", datafn, strerror(errno));
FREE(datafn);
return -1;
}
FREE(datafn);
return 0;
}
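The reordering above keeps the path string alive until after it has been used in the error message. A minimal sketch of that ordering with a hypothetical helper:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static FILE *open_rw(const char *dir, const char *name)
{
    size_t len = strlen(dir) + strlen(name) + 2;
    char *path = malloc(len);
    if (!path) {
        return NULL;
    }
    snprintf(path, len, "%s/%s", dir, name);
    FILE *fp = fopen(path, "r+");
    if (!fp) {
        /* log while 'path' is still valid... */
        fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
    }
    free(path);     /* ...then free it on both the success and failure paths */
    return fp;
}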
@ -622,7 +627,7 @@ static int Data_open(Cache * cf)
* - 0 on success
* - -1 on failure, with appropriate errno set.
*/
static int Meta_open(Cache * cf)
static int Meta_open(Cache *cf)
{
char *metafn = path_append(META_DIR, cf->path);
cf->mfp = fopen(metafn, "r+");
@ -642,7 +647,7 @@ static int Meta_open(Cache * cf)
* \brief Create a metafile
* \return exit on error
*/
static void Meta_create(Cache * cf)
static void Meta_create(Cache *cf)
{
char *metafn = path_append(META_DIR, cf->path);
cf->mfp = fopen(metafn, "w");
@ -652,6 +657,11 @@ static void Meta_create(Cache * cf)
*/
lprintf(fatal, "fopen(%s): %s\n", metafn, strerror(errno));
}
if (fclose(cf->mfp)) {
lprintf(error,
"cannot close metadata after creation: %s.\n",
strerror(errno));
}
FREE(metafn);
}
@ -660,6 +670,7 @@ int Cache_create(const char *path)
Link *this_link = path_to_Link(path);
char *fn = "__UNINITIALISED__";
if (CONFIG.mode == NORMAL) {
fn = curl_easy_unescape(NULL,
this_link->f_url + ROOT_LINK_OFFSET, 0,
@ -667,7 +678,7 @@ int Cache_create(const char *path)
} else if (CONFIG.mode == SINGLE) {
fn = curl_easy_unescape(NULL, this_link->linkname, 0, NULL);
} else if (CONFIG.mode == SONIC) {
fn = this_link->sonic_id;
fn = this_link->sonic.id;
} else {
lprintf(fatal, "Invalid CONFIG.mode\n");
}
@ -683,12 +694,6 @@ int Cache_create(const char *path)
Meta_create(cf);
if (fclose(cf->mfp)) {
lprintf(error,
"cannot close metadata after creation: %s.\n",
strerror(errno));
}
if (Meta_open(cf)) {
Cache_free(cf);
lprintf(error, "cannot open metadata file, %s.\n", fn);
@ -710,8 +715,14 @@ int Cache_create(const char *path)
int res = Cache_exist(fn);
if (res) {
lprintf(fatal, "Cache file creation failed for %s\n", path);
}
if (CONFIG.mode == NORMAL) {
curl_free(fn);
} else if (CONFIG.mode == SONIC) {
curl_free(fn);
}
return res;
@ -734,8 +745,8 @@ Cache *Cache_open(const char *fn)
"thread %x: locking cf_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&cf_lock);
if (link->cache_opened) {
link->cache_opened++;
if (link->cache_ptr) {
link->cache_ptr->cache_opened++;
lprintf(cache_lock_debug,
"thread %x: unlocking cf_lock;\n", pthread_self());
@ -755,7 +766,7 @@ Cache *Cache_open(const char *fn)
return NULL;
}
} else if (CONFIG.mode == SONIC) {
if (Cache_exist(link->sonic_id)) {
if (Cache_exist(link->sonic.id)) {
lprintf(cache_lock_debug,
"thread %x: unlocking cf_lock;\n", pthread_self());
@ -781,7 +792,7 @@ Cache *Cache_open(const char *fn)
* Set the path for the local cache file, if we are in sonic mode
*/
if (CONFIG.mode == SONIC) {
fn = link->sonic_id;
fn = link->sonic.id;
}
cf->path = strndup(fn, MAX_PATH_LEN);
@ -821,7 +832,8 @@ Cache *Cache_open(const char *fn)
*/
if (cf->content_length > Data_size(fn)) {
lprintf(error, "metadata inconsistency %s, \
cf->content_length: %ld, Data_size(fn): %ld.\n", fn, cf->content_length, Data_size(fn));
cf->content_length: %ld, Data_size(fn): %ld.\n", fn, cf->content_length,
Data_size(fn));
Cache_free(cf);
lprintf(cache_lock_debug,
@ -853,7 +865,7 @@ cf->content_length: %ld, Data_size(fn): %ld.\n", fn, cf->content_length, Data_si
return NULL;
}
cf->link->cache_opened = 1;
cf->cache_opened = 1;
/*
* Yup, we just created a circular loop. ;)
*/
@ -865,15 +877,15 @@ cf->content_length: %ld, Data_size(fn): %ld.\n", fn, cf->content_length, Data_si
return cf;
}
void Cache_close(Cache * cf)
void Cache_close(Cache *cf)
{
lprintf(cache_lock_debug,
"thread %x: locking cf_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&cf_lock);
cf->link->cache_opened--;
cf->cache_opened--;
if (cf->link->cache_opened > 0) {
if (cf->cache_opened > 0) {
lprintf(cache_lock_debug,
"thread %x: unlocking cf_lock;\n", pthread_self());
@ -893,6 +905,8 @@ void Cache_close(Cache * cf)
lprintf(error, "cannot close data file %s.\n", strerror(errno));
}
cf->link->cache_ptr = NULL;
lprintf(cache_lock_debug,
"thread %x: unlocking cf_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&cf_lock);
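These hunks move the open counter from the Link into the Cache object itself. A minimal sketch (hypothetical names) of the reference-counting idea, guarded by a single lock the way cf_lock is used above:

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;

struct handle {
    int opened;      /* number of callers currently holding this handle */
};

static struct handle *handle_open(struct handle *h)
{
    pthread_mutex_lock(&registry_lock);
    if (h) {
        h->opened++;                 /* re-use the already-open handle */
    }
    pthread_mutex_unlock(&registry_lock);
    return h;
}

static void handle_close(struct handle *h)
{
    pthread_mutex_lock(&registry_lock);
    int remaining = --h->opened;
    pthread_mutex_unlock(&registry_lock);
    if (remaining == 0) {
        free(h);                     /* last user tears the handle down */
    }
}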
@ -903,7 +917,7 @@ void Cache_close(Cache * cf)
* \brief Check if a segment exists.
* \return 1 if the segment exists
*/
static int Seg_exist(Cache * cf, off_t offset)
static int Seg_exist(Cache *cf, off_t offset)
{
off_t byte = offset / cf->blksz;
return cf->seg[byte];
@ -916,7 +930,7 @@ static int Seg_exist(Cache * cf, off_t offset)
* \param[in] i 1 for exist, 0 for doesn't exist
* \note Call this after downloading a segment.
*/
static void Seg_set(Cache * cf, off_t offset, int i)
static void Seg_set(Cache *cf, off_t offset, int i)
{
off_t byte = offset / cf->blksz;
cf->seg[byte] = i;
@ -938,16 +952,16 @@ static void *Cache_bgdl(void *arg)
uint8_t *recv_buf = CALLOC(cf->blksz, sizeof(uint8_t));
lprintf(debug, "thread %x spawned.\n ", pthread_self());
long recv = path_download(cf->fs_path, (char *) recv_buf, cf->blksz,
long recv = Link_download(cf->link, (char *) recv_buf, cf->blksz,
cf->next_dl_offset);
if (recv < 0) {
lprintf(error, "thread %x received %ld bytes, \
which does't make sense\n", pthread_self(), recv);
which doesn't make sense\n", pthread_self(), recv);
}
if ((recv == cf->blksz) ||
(cf->next_dl_offset ==
(cf->content_length / cf->blksz * cf->blksz))) {
(cf->next_dl_offset ==
(cf->content_length / cf->blksz * cf->blksz))) {
Data_write(cf, recv_buf, recv, cf->next_dl_offset);
Seg_set(cf, cf->next_dl_offset, 1);
} else {
@ -965,12 +979,14 @@ error.\n", recv, cf->blksz);
"thread %x: unlocking w_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&cf->w_lock);
pthread_detach(pthread_self());
if (pthread_detach(pthread_self())) {
lprintf(error, "%s\n", strerror(errno));
};
pthread_exit(NULL);
}
long
Cache_read(Cache * cf, char *const output_buf, const off_t len,
Cache_read(Cache *cf, char *const output_buf, const off_t len,
const off_t offset_start)
{
long send;
@ -1017,11 +1033,11 @@ Cache_read(Cache * cf, char *const output_buf, const off_t len,
uint8_t *recv_buf = CALLOC(cf->blksz, sizeof(uint8_t));
lprintf(debug, "thread %x: spawned.\n ", pthread_self());
long recv = path_download(cf->fs_path, (char *) recv_buf, cf->blksz,
long recv = Link_download(cf->link, (char *) recv_buf, cf->blksz,
dl_offset);
if (recv < 0) {
lprintf(error, "thread %x received %ld bytes, \
which does't make sense\n", pthread_self(), recv);
which doesn't make sense\n", pthread_self(), recv);
}
/*
* check if we have received enough data, write it to the disk
@ -1030,7 +1046,7 @@ which does't make sense\n", pthread_self(), recv);
* Condition 2: offset is the last segment
*/
if ((recv == cf->blksz) ||
(dl_offset == (cf->content_length / cf->blksz * cf->blksz))) {
(dl_offset == (cf->content_length / cf->blksz * cf->blksz))) {
Data_write(cf, recv_buf, recv, dl_offset);
Seg_set(cf, dl_offset, 1);
} else {
@ -1047,12 +1063,11 @@ error.\n", recv, cf->blksz);
/*
* ----------- Download the next segment in background -----------------
*/
bgdl:
{
bgdl: {
}
off_t next_dl_offset = round_div(offset_start, cf->blksz) * cf->blksz;
if ((next_dl_offset > dl_offset) && !Seg_exist(cf, next_dl_offset)
&& next_dl_offset < cf->content_length) {
&& next_dl_offset < cf->content_length) {
/*
* Stop the spawning of multiple background pthreads
*/
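A worked example (illustrative sizes; only the 8 MiB segment matches the documented default) of the two acceptance conditions and the round_div()-based read-ahead offset used above:

#include <stdio.h>

static long round_div(long a, long b)
{
    return (a + (b / 2)) / b;                /* same formula as in util.c */
}

int main(void)
{
    long blksz = 8L * 1024 * 1024;           /* 8 MiB segment size */
    long content_length = 20L * 1024 * 1024; /* 20 MiB file */

    /* The final, short segment starts at 16 MiB; a download of it is
     * accepted even though it returns fewer than blksz bytes. */
    long last_seg = content_length / blksz * blksz;
    printf("last segment offset: %ld\n", last_seg);

    /* A read at 7 MiB rounds to segment 1, so the background download
     * targets offset 8 MiB. */
    long offset_start = 7L * 1024 * 1024;
    long next_dl_offset = round_div(offset_start, blksz) * blksz;
    printf("background download offset: %ld\n", next_dl_offset);
    return 0;
}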

src/cache.h

@ -9,14 +9,14 @@
* separate folders.
*/
#include <pthread.h>
/**
* \brief cache data type
*/
typedef struct Cache Cache;
#include "link.h"
#include "network.h"
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
/**
* \brief Type definition for a cache segment
@ -27,6 +27,9 @@ typedef uint8_t Seg;
* \brief cache data type in-memory data structure
*/
struct Cache {
/** \brief How many times the cache has been opened */
int cache_opened;
/** \brief the FILE pointer for the data file*/
FILE *dfp;
/** \brief the FILE pointer for the metadata */
@ -110,7 +113,7 @@ Cache *Cache_open(const char *fn);
* \brief Close a cache data structure
* \note This function is called by fs_release()
*/
void Cache_close(Cache * cf);
void Cache_close(Cache *cf);
/**
* \brief create a cache file set if it doesn't exist already
@ -138,6 +141,6 @@ void Cache_delete(const char *fn);
* \return the length of the segment the cache system managed to obtain.
* \note Called by fs_read(), verified to be working
*/
long Cache_read(Cache * cf, char *const output_buf, const off_t len,
long Cache_read(Cache *cf, char *const output_buf, const off_t len,
const off_t offset_start);
#endif

src/config.c

@ -53,6 +53,8 @@ void Config_init(void)
CONFIG.insecure_tls = 0;
CONFIG.refresh_timeout = DEFAULT_REFRESH_TIMEOUT;
/*--------------- Cache related ---------------*/
CONFIG.cache_enabled = 0;
@ -70,7 +72,4 @@ void Config_init(void)
CONFIG.sonic_id3 = 0;
CONFIG.sonic_insecure = 0;
/*---------- Print version number -----------*/
print_version();
}

src/config.h

@ -23,6 +23,11 @@
*/
#define DEFAULT_NETWORK_MAX_CONNS 10
/**
* \brief The default refresh_timeout
*/
#define DEFAULT_REFRESH_TIMEOUT 3600
/**
* \brief Operation modes
*/
@ -53,6 +58,8 @@ typedef struct {
char *proxy_username;
/** \brief HTTP proxy password */
char *proxy_password;
/** \brief HTTP proxy certificate file */
char *proxy_cafile;
/** \brief HTTP maximum connection count */
long max_conns;
/** \brief HTTP user agent*/
@ -63,6 +70,10 @@ typedef struct {
int no_range_check;
/** \brief Disable TLS certificate verification */
int insecure_tls;
/** \brief Server certificate file */
char *cafile;
/** \brief Refresh directory listing after refresh_timeout seconds*/
int refresh_timeout;
/*--------------- Cache related ---------------*/
/** \brief Whether cache mode is enabled */
int cache_enabled;

src/fuse_local.c

@ -1,6 +1,6 @@
#include "fuse_local.h"
#include "cache.h"
#include "link.h"
#include "log.h"
/*
@ -44,7 +44,7 @@ static int fs_getattr(const char *path, struct stat *stbuf)
if (!link) {
return -ENOENT;
}
struct timespec spec;
struct timespec spec = { 0 };
spec.tv_sec = link->time;
#if defined(__APPLE__) && defined(__MACH__)
stbuf->st_mtimespec = spec;
@ -95,23 +95,29 @@ static int fs_open(const char *path, struct fuse_file_info *fi)
if (!link) {
return -ENOENT;
}
if ((fi->flags & 3) != O_RDONLY) {
return -EACCES;
lprintf(debug, "%s found.\n", path);
if ((fi->flags & O_RDWR) != O_RDONLY) {
return -EROFS;
}
if (CACHE_SYSTEM_INIT) {
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
if (!fi->fh) {
/*
* The link clearly exists, the cache cannot be opened, attempt
* cache creation
*/
lprintf(debug, "Cache_delete(%s);\n", path);
Cache_delete(path);
lprintf(debug, "Cache_create(%s);\n", path);
Cache_create(path);
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
/*
* The cache definitely cannot be opened for some reason.
*/
if (!fi->fh) {
lprintf(fatal, "Cache file creation failure for %s.\n", path);
return -ENOENT;
}
}
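For comparison, a self-contained sketch of read-only enforcement written with the POSIX O_ACCMODE mask, which isolates the access-mode bits (O_RDONLY, O_WRONLY, O_RDWR) from the open flags; this is an alternative formulation, not the project's code:

#include <fcntl.h>
#include <stdio.h>

static int check_read_only(int flags)
{
    if ((flags & O_ACCMODE) != O_RDONLY) {
        return -1;                       /* a caller would map this to -EROFS */
    }
    return 0;
}

int main(void)
{
    printf("%d %d %d\n",
           check_read_only(O_RDONLY),             /* 0  */
           check_read_only(O_WRONLY | O_CREAT),   /* -1 */
           check_read_only(O_RDWR));              /* -1 */
    return 0;
}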
@ -138,15 +144,18 @@ fs_readdir(const char *path, void *buf, fuse_fill_dir_t dir_add,
(void) fi;
LinkTable *linktbl;
if (!strcmp(path, "/")) {
linktbl = ROOT_LINK_TBL;
} else {
linktbl = path_to_Link_LinkTable_new(path);
if (!linktbl) {
return -ENOENT;
}
#ifdef DEBUG
static int j = 0;
lprintf(debug, "!!!!Calling fs_readdir for the %d time!!!!\n", j);
j++;
#endif
linktbl = path_to_Link_LinkTable_new(path);
if (!linktbl) {
return -ENOENT;
}
/*
* start adding the links
*/

File diff suppressed because it is too large.

src/link.h

@ -5,16 +5,20 @@
* \file link.h
* \brief link related structures and functions
*/
#include "config.h"
#include "util.h"
#include <curl/curl.h>
/** \brief Link type */
typedef struct Link Link;
typedef struct LinkTable LinkTable;
#include "cache.h"
#include "config.h"
#include "network.h"
#include "sonic.h"
/** \brief the link type */
#include <curl/curl.h>
/**
* \brief the link type
*/
typedef enum {
LINK_HEAD = 'H',
LINK_DIR = 'D',
@ -23,30 +27,15 @@ typedef enum {
LINK_UNINITIALISED_FILE = 'U'
} LinkType;
/** \brief for storing downloaded data in memory */
typedef struct {
char *data;
size_t size;
} DataStruct;
/** \brief specify the type of data transfer */
typedef enum {
FILESTAT = 's',
DATA = 'd'
} TransferType;
/** \brief for storing the link being transferred, and metadata */
typedef struct {
TransferType type;
int transferring;
Link *link;
} TransferStruct;
/**
* \brief link table type
* \details index 0 contains the Link for the base URL
*/
typedef struct LinkTable LinkTable;
struct LinkTable {
int num;
time_t index_time;
Link **links;
};
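With index_time stored per table and refresh_timeout in the configuration, a staleness test reduces to a time comparison. A hedged sketch (hypothetical helper and struct stub, not the project's implementation):

#include <time.h>

struct linktable_stub {
    time_t index_time;       /* when the listing was last (re)built */
};

/* Non-zero once more than refresh_timeout seconds have elapsed. */
static int table_is_stale(const struct linktable_stub *tbl, int refresh_timeout)
{
    return difftime(time(NULL), tbl->index_time) > refresh_timeout;
}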
/**
* \brief Link type data structure
@ -54,6 +43,8 @@ typedef struct LinkTable LinkTable;
struct Link {
/** \brief The link name in the last level of the URL */
char linkname[MAX_FILENAME_LEN + 1];
/** \brief This is for storing the unescaped path */
char linkpath[MAX_FILENAME_LEN + 1];
/** \brief The full URL of the file */
char f_url[MAX_PATH_LEN + 1];
/** \brief The type of the link */
@ -64,31 +55,10 @@ struct Link {
LinkTable *next_table;
/** \brief CURLINFO_FILETIME obtained from the server */
long time;
/** \brief How many times associated cache has been opened */
int cache_opened;
/** \brief The pointer associated with the cache file */
Cache *cache_ptr;
/**
* \brief Sonic id field
* \details This is used to store the followings:
* - Arist ID
* - Album ID
* - Song ID
* - Sub-directory ID (in the XML response, this is the ID on the "child"
* element)
*/
char *sonic_id;
/**
* \brief Sonic directory depth
* \details This is used exclusively in ID3 mode to store the depth of the
* current directory.
*/
int sonic_depth;
};
struct LinkTable {
int num;
Link **links;
/** \brief Stores *sonic related data */
Sonic sonic;
};
/**
@ -109,7 +79,7 @@ LinkTable *LinkSystem_init(const char *raw_url);
/**
* \brief Set the stats of a link, after curl multi handle finished querying
*/
void Link_set_file_stat(Link * this_link, CURL * curl);
void Link_set_file_stat(Link *this_link, CURL *curl);
/**
* \brief create a new LinkTable
@ -117,12 +87,19 @@ void Link_set_file_stat(Link * this_link, CURL * curl);
LinkTable *LinkTable_new(const char *url);
/**
* \brief download a link
* \brief download a path
* \return the number of bytes downloaded
*/
long path_download(const char *path, char *output_buf, size_t size,
off_t offset);
/**
* \brief Download a Link
* \return the number of bytes downloaded
*/
long Link_download(Link *link, char *output_buf, size_t req_size,
off_t offset);
/**
* \brief find the link associated with a path
*/
@ -136,18 +113,19 @@ LinkTable *path_to_Link_LinkTable_new(const char *path);
/**
* \brief dump a link table to the disk.
*/
int LinkTable_disk_save(LinkTable * linktbl, const char *dirn);
int LinkTable_disk_save(LinkTable *linktbl, const char *dirn);
/**
* \brief load a link table from the disk.
* \param[in] dirn We expected the unescaped_path here!
*/
LinkTable *LinkTable_disk_open(const char *dirn);
/**
* \brief Download a link's content to the memory
* \warning You MUST free the memory field in DataStruct after use!
* \warning You MUST free the memory field in TransferStruct after use!
*/
DataStruct Link_to_DataStruct(Link * head_link);
TransferStruct Link_download_full(Link *head_link);
/**
* \brief Allocate a LinkTable
@ -158,15 +136,15 @@ LinkTable *LinkTable_alloc(const char *url);
/**
* \brief free a LinkTable
*/
void LinkTable_free(LinkTable * linktbl);
void LinkTable_free(LinkTable *linktbl);
/**
* \brief print a LinkTable
*/
void LinkTable_print(LinkTable * linktbl);
void LinkTable_print(LinkTable *linktbl);
/**
* \brief add a Link to a LinkTable
*/
void LinkTable_add(LinkTable * linktbl, Link * link);
void LinkTable_add(LinkTable *linktbl, Link *link);
#endif

src/log.c

@ -15,7 +15,11 @@ int log_level_init()
if (env) {
return atoi(env);
}
#ifdef DEBUG
return DEFAULT_LOG_LEVEL | debug;
#else
return DEFAULT_LOG_LEVEL;
#endif
}
void
@ -36,14 +40,17 @@ log_printf(LogType type, const char *file, const char *func, int line,
case info:
goto print_actual_message;
default:
fprintf(stderr, "Debug(%x):", type);
fprintf(stderr, "Debug");
if (type != debug) {
fprintf(stderr, "(%x)", type);
}
fprintf(stderr, ":");
break;
}
fprintf(stderr, "%s:%d:", file, line);
print_actual_message:
{
print_actual_message: {
}
fprintf(stderr, "%s: ", func);
va_list args;
@ -60,10 +67,10 @@ log_printf(LogType type, const char *file, const char *func, int line,
void print_version()
{
/* FUSE prints its help to stderr */
lprintf(info, "HTTPDirFS version " VERSION "\n");
fprintf(stderr, "HTTPDirFS version " VERSION "\n");
/*
* --------- Print off SSL engine version ---------
*/
curl_version_info_data *data = curl_version_info(CURLVERSION_NOW);
lprintf(info, "libcurl SSL engine: %s\n", data->ssl_version);
fprintf(stderr, "libcurl SSL engine: %s\n", data->ssl_version);
}

src/log.h

@ -13,6 +13,8 @@ typedef enum {
link_lock_debug = 1 << 5,
network_lock_debug = 1 << 6,
cache_lock_debug = 1 << 7,
memcache_debug = 1 << 8,
libcurl_debug = 1 << 9,
} LogType;
/**
@ -37,10 +39,11 @@ void log_printf(LogType type, const char *file, const char *func, int line,
* \details This macro automatically prints out the filename and line number
*/
#define lprintf(type, ...) \
log_printf(type, __FILE__, __func__, __LINE__, __VA_ARGS__);
#endif
log_printf(type, __FILE__, __func__, __LINE__, __VA_ARGS__)
/**
* \brief Print the version information for HTTPDirFS
*/
void print_version();
#endif
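Dropping the trailing semicolon from the macro body matters for if/else callers: with the semicolon baked in, the expansion leaves an empty statement before the else, which does not compile. A small self-contained illustration with a hypothetical macro:

#include <stdio.h>

#define SAY(msg) printf("%s\n", msg)   /* no trailing ';' in the macro body */

int main(void)
{
    int verbose = 1;
    if (verbose)
        SAY("verbose");                /* the caller supplies the ';' */
    else
        SAY("quiet");
    return 0;
}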

src/main.c

@ -1,8 +1,7 @@
#include "config.h"
#include "cache.h"
#include "fuse_local.h"
#include "network.h"
#include "link.h"
#include "log.h"
#include "util.h"
#include <getopt.h>
#include <stdlib.h>
@ -90,7 +89,7 @@ int main(int argc, char **argv)
*/
char *base_url = argv[argc - 2];
if (strncmp(base_url, "http://", 7)
&& strncmp(base_url, "https://", 8)) {
&& strncmp(base_url, "https://", 8)) {
fprintf(stderr, "Error: Please supply a valid URL.\n");
print_help(argv[0], 0);
exit(EXIT_FAILURE);
@ -109,7 +108,7 @@ activate Sonic mode.\n");
}
}
fuse_start:
fuse_start:
fuse_local_init(fuse_argc, fuse_argv);
return 0;
@ -144,11 +143,11 @@ void parse_config_file(char ***argv, int *argc)
char *space;
space = strchr(buf, ' ');
if (!space) {
*argv = realloc(*argv, *argc * sizeof(char **));
*argv = realloc(*argv, *argc * sizeof(char *));
(*argv)[*argc - 1] = strndup(buf, buf_len);
} else {
(*argc)++;
*argv = realloc(*argv, *argc * sizeof(char **));
*argv = realloc(*argv, *argc * sizeof(char *));
/*
* Only copy up to the space character
*/
@ -162,6 +161,7 @@ void parse_config_file(char ***argv, int *argc)
}
}
}
fclose(config);
}
FREE(full_path);
}
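The realloc() fix above sizes the array by its element type, char *, rather than char **. A condensed sketch of the growth pattern (names are illustrative; unlike the original, the sketch also checks the realloc result):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void add_word(char ***argv, int *argc, const char *word)
{
    (*argc)++;
    char **grown = realloc(*argv, *argc * sizeof(char *));  /* element type */
    if (!grown) {
        perror("realloc");
        exit(EXIT_FAILURE);
    }
    *argv = grown;
    (*argv)[*argc - 1] = strdup(word);
}

int main(void)
{
    char **argv = NULL;
    int argc = 0;
    add_word(&argv, &argc, "--cache");
    add_word(&argv, &argc, "--refresh-timeout");
    printf("%d args, first: %s\n", argc, argv[0]);
    for (int i = 0; i < argc; i++) {
        free(argv[i]);
    }
    free(argv);
    return 0;
}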
@ -169,7 +169,7 @@ void parse_config_file(char ***argv, int *argc)
static int
parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
{
char c;
int c;
int long_index = 0;
const char *short_opts = "o:hVdfsp:u:P:";
const struct option long_opts[] = {
@ -199,11 +199,14 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
{ "insecure-tls", no_argument, NULL, 'L' }, /* 20 */
{ "config", required_argument, NULL, 'L' }, /* 21 */
{ "single-file-mode", required_argument, NULL, 'L' }, /* 22 */
{ "cacert", required_argument, NULL, 'L' }, /* 23 */
{ "proxy-cacert", required_argument, NULL, 'L' }, /* 24 */
{ "refresh-timeout", required_argument, NULL, 'L' }, /* 25 */
{ 0, 0, 0, 0 }
};
while ((c =
getopt_long(argc, argv, short_opts, long_opts,
&long_index)) != -1) {
getopt_long(argc, argv, short_opts, long_opts,
&long_index)) != -1) {
switch (c) {
case 'o':
add_arg(fuse_argv, fuse_argc, "-o");
@ -217,11 +220,12 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
*/
return 1;
case 'V':
print_version(argv[0], 1);
print_version();
add_arg(fuse_argv, fuse_argc, "-V");
return 1;
case 'd':
add_arg(fuse_argv, fuse_argc, "-d");
CONFIG.log_type |= debug;
break;
case 'f':
add_arg(fuse_argv, fuse_argc, "-f");
@ -296,6 +300,15 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
case 22:
CONFIG.mode = SINGLE;
break;
case 23:
CONFIG.cafile = strdup(optarg);
break;
case 24:
CONFIG.proxy_cafile = strdup(optarg);
break;
case 25:
CONFIG.refresh_timeout = atoi(optarg);
break;
default:
fprintf(stderr, "see httpdirfs -h for usage\n");
return 1;
@ -347,9 +360,11 @@ HTTPDirFS options:\n\
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html\n\
--proxy-username Username for the proxy\n\
--proxy-password Password for the proxy\n\
--proxy-cacert Certificate authority for the proxy\n\
--cache Enable cache (default: off)\n\
--cache-location Set a custom cache location\n\
(default: \"${XDG_CACHE_HOME}/httpdirfs\")\n\
--cacert Certificate authority for the server\n\
--dl-seg-size Set cache download segment size, in MB (default: 8)\n\
Note: this setting is ignored if previously\n\
cached data is found for the requested file.\n\
@ -360,12 +375,14 @@ HTTPDirFS options:\n\
to 1TB in size using the default segment size.\n\
--max-conns Set maximum number of network connections that\n\
libcurl is allowed to make. (default: 10)\n\
--refresh-timeout The directories are refreshed after the specified\n\
time, in seconds (default: 3600)\n\
--retry-wait Set delay in seconds before retrying an HTTP request\n\
after encountering an error. (default: 5)\n\
--user-agent Set user agent string (default: \"HTTPDirFS\")\n\
--no-range-check Disable the build-in check for the server's support\n\
--no-range-check Disable the built-in check for the server's support\n\
for HTTP range requests\n\
--insecure_tls Disable licurl TLS certificate verification by\n\
--insecure-tls Disable libcurl TLS certificate verification by\n\
setting CURLOPT_SSL_VERIFYHOST to 0\n\
--single-file-mode Single file mode - rather than mounting a whole\n\
directory, present a single file inside a virtual\n\

src/memcache.c (new file, 28 lines)

@ -0,0 +1,28 @@
#include "memcache.h"
#include "log.h"
#include "util.h"
#include <stdlib.h>
#include <string.h>
size_t write_memory_callback(void *recv_data, size_t size, size_t nmemb,
void *userp)
{
size_t recv_size = size * nmemb;
TransferStruct *ts = (TransferStruct *) userp;
ts->data = realloc(ts->data, ts->curr_size + recv_size + 1);
if (!ts->data) {
/*
* out of memory!
*/
lprintf(fatal, "realloc failure!\n");
}
memmove(&ts->data[ts->curr_size], recv_data, recv_size);
ts->curr_size += recv_size;
ts->data[ts->curr_size] = '\0';
return recv_size;
}
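A callback with this signature is what libcurl expects for CURLOPT_WRITEFUNCTION. A hedged usage sketch (placeholder URL, minimal error handling, and a local growable buffer standing in for TransferStruct):

#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct growbuf {
    char *data;
    size_t curr_size;
};

static size_t grow_cb(void *recv_data, size_t size, size_t nmemb, void *userp)
{
    size_t recv_size = size * nmemb;
    struct growbuf *b = userp;
    char *p = realloc(b->data, b->curr_size + recv_size + 1);
    if (!p) {
        return 0;         /* returning less than recv_size aborts the transfer */
    }
    b->data = p;
    memcpy(b->data + b->curr_size, recv_data, recv_size);
    b->curr_size += recv_size;
    b->data[b->curr_size] = '\0';
    return recv_size;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    struct growbuf buf = { NULL, 0 };
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, grow_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf);
        if (curl_easy_perform(curl) == CURLE_OK) {
            printf("downloaded %zu bytes\n", buf.curr_size);
        }
        curl_easy_cleanup(curl);
    }
    free(buf.data);
    curl_global_cleanup();
    return 0;
}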

src/memcache.h (new file, 35 lines)

@ -0,0 +1,35 @@
#ifndef memcache_H
#define memcache_H
#include "link.h"
/**
* \brief specify the type of data transfer
*/
typedef enum {
FILESTAT = 's',
DATA = 'd'
} TransferType;
/**
* \brief For storing transfer data and metadata
*/
struct TransferStruct {
/** \brief The array to store the data */
char *data;
/** \brief The current size of the array */
size_t curr_size;
/** \brief The type of transfer being done */
TransferType type;
/** \brief Whether transfer is in progress */
volatile int transferring;
/** \brief The link associated with the transfer */
Link *link;
};
/**
* \brief Callback function for file transfer
*/
size_t write_memory_callback(void *contents, size_t size, size_t nmemb,
void *userp);
#endif

src/network.c

@ -1,8 +1,8 @@
#include "network.h"
#include "cache.h"
#include "config.h"
#include "log.h"
#include "memcache.h"
#include "util.h"
#include <openssl/crypto.h>
@ -86,7 +86,7 @@ failed!\n", i);
* https://curl.haxx.se/libcurl/c/threaded-shared-conn.html
*/
static void
curl_callback_lock(CURL * handle, curl_lock_data data,
curl_callback_lock(CURL *handle, curl_lock_data data,
curl_lock_access access, void *userptr)
{
(void) access; /* unused */
@ -97,7 +97,7 @@ curl_callback_lock(CURL * handle, curl_lock_data data,
}
static void
curl_callback_unlock(CURL * handle, curl_lock_data data, void *userptr)
curl_callback_unlock(CURL *handle, curl_lock_data data, void *userptr)
{
(void) userptr; /* unused */
(void) handle; /* unused */
@ -111,25 +111,35 @@ curl_callback_unlock(CURL * handle, curl_lock_data data, void *userptr)
* https://curl.haxx.se/libcurl/c/10-at-a-time.html
*/
static void
curl_process_msgs(CURLMsg * curl_msg, int n_running_curl, int n_mesgs)
curl_process_msgs(CURLMsg *curl_msg, int n_running_curl, int n_mesgs)
{
(void) n_running_curl;
(void) n_mesgs;
static volatile int slept = 0;
if (curl_msg->msg == CURLMSG_DONE) {
TransferStruct *transfer;
TransferStruct *ts;
CURL *curl = curl_msg->easy_handle;
curl_easy_getinfo(curl_msg->easy_handle, CURLINFO_PRIVATE,
&transfer);
transfer->transferring = 0;
CURLcode ret =
curl_easy_getinfo(curl_msg->easy_handle, CURLINFO_PRIVATE,
&ts);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
ts->transferring = 0;
char *url = NULL;
curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
ret = curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
/*
* Wait for 5 seconds if we get HTTP 429
*/
long http_resp = 0;
curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_resp);
ret = curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_resp);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
if (HTTP_temp_failure(http_resp)) {
if (!slept) {
lprintf(warning,
@ -146,8 +156,8 @@ curl_process_msgs(CURLMsg * curl_msg, int n_running_curl, int n_mesgs)
/*
* Transfer successful, set the file size
*/
if (transfer->type == FILESTAT) {
Link_set_file_stat(transfer->link, curl);
if (ts->type == FILESTAT) {
Link_set_file_stat(ts->link, curl);
}
} else {
lprintf(error, "%d - %s <%s>\n",
@ -158,9 +168,9 @@ curl_process_msgs(CURLMsg * curl_msg, int n_running_curl, int n_mesgs)
/*
* clean up the handle, if we are querying the file size
*/
if (transfer->type == FILESTAT) {
if (ts->type == FILESTAT) {
curl_easy_cleanup(curl);
FREE(transfer);
FREE(ts);
}
} else {
lprintf(warning, "curl_msg->msg: %d\n", curl_msg->msg);
@ -182,56 +192,13 @@ int curl_multi_perform_once(void)
*/
int n_running_curl;
CURLMcode mc = curl_multi_perform(curl_multi, &n_running_curl);
if (mc > 0) {
if (mc) {
lprintf(error, "%s\n", curl_multi_strerror(mc));
}
fd_set fdread;
fd_set fdwrite;
fd_set fdexcep;
int maxfd = -1;
long curl_timeo = -1;
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
/*
* set a default timeout for select()
*/
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
curl_multi_timeout(curl_multi, &curl_timeo);
/*
* We effectively cap timeout to 1 sec
*/
if (curl_timeo >= 0) {
timeout.tv_sec = curl_timeo / 1000;
if (timeout.tv_sec > 1) {
timeout.tv_sec = 1;
} else {
timeout.tv_usec = (curl_timeo % 1000) * 1000;
}
}
/*
* get file descriptors from the transfers
*/
mc = curl_multi_fdset(curl_multi, &fdread, &fdwrite, &fdexcep, &maxfd);
if (mc > 0) {
lprintf(error, "%s.\n", curl_multi_strerror(mc));
}
if (maxfd == -1) {
usleep(100 * 1000);
} else {
if (select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout) < 0) {
lprintf(error, "select(): %s.\n", strerror(errno));
}
mc = curl_multi_poll(curl_multi, NULL, 0, 100, NULL);
if (mc) {
lprintf(error, "%s\n", curl_multi_strerror(mc));
}
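The hunk above swaps the hand-rolled select() plumbing for curl_multi_poll(), available since libcurl 7.66.0. A minimal, self-contained driver loop showing the same call pattern (placeholder URL; the response body goes to stdout):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy);

    int still_running = 0;
    do {
        CURLMcode mc = curl_multi_perform(multi, &still_running);
        if (!mc && still_running) {
            /* wait up to 100 ms for socket activity or the next timeout */
            mc = curl_multi_poll(multi, NULL, 0, 100, NULL);
        }
        if (mc) {
            fprintf(stderr, "%s\n", curl_multi_strerror(mc));
            break;
        }
    } while (still_running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}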
/*
@ -305,73 +272,46 @@ void NetworkSystem_init(void)
crypto_lock_init();
}
void transfer_blocking(CURL * curl)
void transfer_blocking(CURL *curl)
{
/*
* We don't need to malloc here, as the transfer is finished before
* the variable gets popped from the stack
*/
volatile TransferStruct transfer;
transfer.type = DATA;
transfer.transferring = 1;
curl_easy_setopt(curl, CURLOPT_PRIVATE, &transfer);
TransferStruct *ts;
CURLcode ret = curl_easy_getinfo(curl, CURLINFO_PRIVATE, &ts);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
lprintf(network_lock_debug,
"thread %x: locking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&transfer_lock);
CURLMcode res = curl_multi_add_handle(curl_multi, curl);
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
if (res > 0) {
lprintf(error, "%d, %s\n", res, curl_multi_strerror(res));
}
while (transfer.transferring) {
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
while (ts->transferring) {
curl_multi_perform_once();
}
}
void transfer_nonblocking(CURL * curl)
void transfer_nonblocking(CURL *curl)
{
lprintf(network_lock_debug,
"thread %x: locking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&transfer_lock);
CURLMcode res = curl_multi_add_handle(curl_multi, curl);
if (res > 0) {
lprintf(error, "%s\n", curl_multi_strerror(res));
}
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
if (res > 0) {
lprintf(error, "%s\n", curl_multi_strerror(res));
}
}
size_t
write_memory_callback(void *contents, size_t size, size_t nmemb,
void *userp)
{
size_t realsize = size * nmemb;
DataStruct *mem = (DataStruct *) userp;
mem->data = realloc(mem->data, mem->size + realsize + 1);
if (!mem->data) {
/*
* out of memory!
*/
lprintf(fatal, "realloc failure!\n");
}
memmove(&mem->data[mem->size], contents, realsize);
mem->size += realsize;
mem->data[mem->size] = 0;
return realsize;
}
int HTTP_temp_failure(HTTPResponseCode http_resp)

src/network.h

@ -6,8 +6,12 @@
* \brief network related functions
*/
typedef struct TransferStruct TransferStruct;
#include "link.h"
#include <curl/curl.h>
/** \brief HTTP response codes */
typedef enum {
HTTP_OK = 200,
@ -28,15 +32,10 @@ int curl_multi_perform_once(void);
void NetworkSystem_init(void);
/** \brief blocking file transfer */
void transfer_blocking(CURL * curl);
void transfer_blocking(CURL *curl);
/** \brief non blocking file transfer */
void transfer_nonblocking(CURL * curl);
/** \brief callback function for file transfer */
size_t
write_memory_callback(void *contents, size_t size, size_t nmemb,
void *userp);
void transfer_nonblocking(CURL *curl);
/**
* \brief check if a HTTP response code corresponds to a temporary failure

src/sonic.c

@ -1,10 +1,10 @@
#include "sonic.h"
#include "config.h"
#include "util.h"
#include "link.h"
#include "network.h"
#include "log.h"
#include "link.h"
#include "memcache.h"
#include "util.h"
#include <expat.h>
@ -24,7 +24,7 @@ typedef struct {
static SonicConfigStruct SONIC_CONFIG;
/**
* \brief initalise Sonic configuration struct
* \brief initialise Sonic configuration struct
*/
void
sonic_config_init(const char *server, const char *username,
@ -191,22 +191,22 @@ XML_parser_general(void *data, const char *elem, const char **attr)
*/
link->type = LINK_DIR;
} else if (!strcmp(elem, "artist")
&& linktbl->links[0]->sonic_depth != 3) {
&& linktbl->links[0]->sonic.depth != 3) {
/*
* We want to skip the first "artist" element in the album table
*/
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
} else if (!strcmp(elem, "album")
&& linktbl->links[0]->sonic_depth == 3) {
&& linktbl->links[0]->sonic.depth == 3) {
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
/*
* The new table should be a level 4 song table
*/
link->sonic_depth = 4;
link->sonic.depth = 4;
} else if (!strcmp(elem, "song")
&& linktbl->links[0]->sonic_depth == 4) {
&& linktbl->links[0]->sonic.depth == 4) {
link = CALLOC(1, sizeof(Link));
link->type = LINK_FILE;
} else {
@ -224,8 +224,8 @@ XML_parser_general(void *data, const char *elem, const char **attr)
char *suffix = "";
for (int i = 0; attr[i]; i += 2) {
if (!strcmp("id", attr[i])) {
link->sonic_id = CALLOC(MAX_FILENAME_LEN + 1, sizeof(char));
strncpy(link->sonic_id, attr[i + 1], MAX_FILENAME_LEN);
link->sonic.id = CALLOC(MAX_FILENAME_LEN + 1, sizeof(char));
strncpy(link->sonic.id, attr[i + 1], MAX_FILENAME_LEN);
id_set = 1;
continue;
}
@ -252,7 +252,7 @@ XML_parser_general(void *data, const char *elem, const char **attr)
*/
if (!linkname_set) {
if (!strcmp("title", attr[i])
|| !strcmp("name", attr[i])) {
|| !strcmp("name", attr[i])) {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
linkname_set = 1;
continue;
@ -296,7 +296,7 @@ XML_parser_general(void *data, const char *elem, const char **attr)
}
if (!linkname_set && strnlen(title, MAX_PATH_LEN) > 0 &&
strnlen(suffix, MAX_PATH_LEN) > 0) {
strnlen(suffix, MAX_PATH_LEN) > 0) {
snprintf(link->linkname, MAX_FILENAME_LEN, "%02d - %s.%s",
track, title, suffix);
linkname_set = 1;
@ -311,7 +311,7 @@ XML_parser_general(void *data, const char *elem, const char **attr)
}
if (link->type == LINK_FILE) {
char *url = sonic_stream_link(link->sonic_id);
char *url = sonic_stream_link(link->sonic.id);
strncpy(link->f_url, url, MAX_PATH_LEN);
FREE(url);
}
@ -319,21 +319,45 @@ XML_parser_general(void *data, const char *elem, const char **attr)
LinkTable_add(linktbl, link);
}
static void sanitise_LinkTable(LinkTable *linktbl)
{
for (int i = 0; i < linktbl->num; i++) {
if (!strcmp(linktbl->links[i]->linkname, ".")) {
/* Note the super long sanitised name to avoid collision */
strcpy(linktbl->links[i]->linkname, "__DOT__");
}
if (!strcmp(linktbl->links[i]->linkname, "/")) {
/* Ditto */
strcpy(linktbl->links[i]->linkname, "__FORWARD-SLASH__");
}
for (size_t j = 0; j < strlen(linktbl->links[i]->linkname); j++) {
if (linktbl->links[i]->linkname[j] == '/') {
linktbl->links[i]->linkname[j] = '-';
}
}
if (linktbl->links[i]->next_table != NULL) {
sanitise_LinkTable(linktbl->links[i]->next_table);
}
}
}
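A standalone sketch of the same sanitisation applied to a plain string rather than a LinkTable (helper name and buffer sizes are illustrative):

#include <stdio.h>
#include <string.h>

static void sanitise_name(char *name, size_t cap)
{
    if (!strcmp(name, ".")) {
        snprintf(name, cap, "__DOT__");             /* same special case as above */
    } else if (!strcmp(name, "/")) {
        snprintf(name, cap, "__FORWARD-SLASH__");
    }
    for (size_t i = 0; i < strlen(name); i++) {
        if (name[i] == '/') {
            name[i] = '-';                          /* no '/' inside a component */
        }
    }
}

int main(void)
{
    char a[32] = ".";
    char b[32] = "AC/DC";
    sanitise_name(a, sizeof(a));
    sanitise_name(b, sizeof(b));
    printf("%s, %s\n", a, b);                       /* prints: __DOT__, AC-DC */
    return 0;
}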
/**
* \brief parse a XML string in order to fill in the LinkTable
*/
static LinkTable *sonic_url_to_LinkTable(const char *url,
XML_StartElementHandler handler,
int depth)
XML_StartElementHandler handler, int depth)
{
LinkTable *linktbl = LinkTable_alloc(url);
linktbl->links[0]->sonic_depth = depth;
linktbl->links[0]->sonic.depth = depth;
/*
* start downloading the base URL
*/
DataStruct xml = Link_to_DataStruct(linktbl->links[0]);
if (xml.size == 0) {
TransferStruct xml = Link_download_full(linktbl->links[0]);
if (xml.curr_size == 0) {
LinkTable_free(linktbl);
return NULL;
}
@ -343,7 +367,7 @@ static LinkTable *sonic_url_to_LinkTable(const char *url,
XML_SetStartElementHandler(parser, handler);
if (XML_Parse(parser, xml.data, xml.size, 1) == XML_STATUS_ERROR) {
if (XML_Parse(parser, xml.data, xml.curr_size, 1) == XML_STATUS_ERROR) {
lprintf(error,
"Parse error at line %lu: %s\n",
XML_GetCurrentLineNumber(parser),
@ -356,6 +380,8 @@ static LinkTable *sonic_url_to_LinkTable(const char *url,
LinkTable_print(linktbl);
sanitise_LinkTable(linktbl);
return linktbl;
}
@ -429,7 +455,7 @@ XML_parser_id3_root(void *data, const char *elem, const char **attr)
/*
* The new table should be a level 3 album table
*/
link->sonic_depth = 3;
link->sonic.depth = 3;
for (int i = 0; attr[i]; i += 2) {
if (!strcmp("name", attr[i])) {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
@ -438,9 +464,9 @@ XML_parser_id3_root(void *data, const char *elem, const char **attr)
}
if (!strcmp("id", attr[i])) {
link->sonic_id =
link->sonic.id =
CALLOC(MAX_FILENAME_LEN + 1, sizeof(char));
strncpy(link->sonic_id, attr[i + 1], MAX_FILENAME_LEN);
strncpy(link->sonic.id, attr[i + 1], MAX_FILENAME_LEN);
id_set = 1;
continue;
}
@ -467,25 +493,25 @@ LinkTable *sonic_LinkTable_new_id3(int depth, const char *id)
char *url;
LinkTable *linktbl = ROOT_LINK_TBL;
switch (depth) {
/*
* Root table
*/
/*
* Root table
*/
case 0:
url = sonic_gen_url_first_part("getArtists");
linktbl = sonic_url_to_LinkTable(url, XML_parser_id3_root, 0);
FREE(url);
break;
/*
* Album table - get all the albums of an artist
*/
/*
* Album table - get all the albums of an artist
*/
case 3:
url = sonic_getArtist_link(id);
linktbl = sonic_url_to_LinkTable(url, XML_parser_general, depth);
FREE(url);
break;
/*
* Song table - get all the songs of an album
*/
/*
* Song table - get all the songs of an album
*/
case 4:
url = sonic_getAlbum_link(id);
linktbl = sonic_url_to_LinkTable(url, XML_parser_general, depth);
@ -498,5 +524,6 @@ LinkTable *sonic_LinkTable_new_id3(int depth, const char *id)
lprintf(fatal, "case %d.\n", depth);
break;
}
return linktbl;
}

src/sonic.h

@ -5,6 +5,25 @@
* \brief Sonic related function
*/
typedef struct {
/**
* \brief Sonic id field
* \details This is used to store the following:
* - Artist ID
* - Album ID
* - Song ID
* - Sub-directory ID (in the XML response, this is the ID on the "child"
* element)
*/
char *id;
/**
* \brief Sonic directory depth
* \details This is used exclusively in ID3 mode to store the depth of the
* current directory.
*/
int depth;
} Sonic;
#include "link.h"
/**

src/util.c

@ -1,15 +1,17 @@
#include "config.h"
#include "util.h"
#include "config.h"
#include "log.h"
#include <openssl/md5.h>
#include <uuid/uuid.h>
#include <errno.h>
#include <execinfo.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
/**
* \brief Backtrace buffer size
@ -31,7 +33,7 @@ char *path_append(const char *path, const char *filename)
{
int needs_separator = 0;
if ((path[strnlen(path, MAX_PATH_LEN) - 1] != '/')
&& (filename[0] != '/')) {
&& (filename[0] != '/')) {
needs_separator = 1;
}
@ -52,25 +54,23 @@ int64_t round_div(int64_t a, int64_t b)
return (a + (b / 2)) / b;
}
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t * x)
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t *x)
{
int i;
i = pthread_mutex_unlock(x);
if (i) {
lprintf(fatal,
"thread %x: pthread_mutex_unlock() failed, %d, %s\n",
pthread_self(), i, strerror(i));
"thread %x: %d, %s\n", pthread_self(), i, strerror(i));
}
}
void PTHREAD_MUTEX_LOCK(pthread_mutex_t * x)
void PTHREAD_MUTEX_LOCK(pthread_mutex_t *x)
{
int i;
i = pthread_mutex_lock(x);
if (i) {
lprintf(fatal,
"thread %x: pthread_mutex_lock() failed, %d, %s\n",
pthread_self(), i, strerror(i));
"thread %x: %d, %s\n", pthread_self(), i, strerror(i));
}
}
@ -88,7 +88,7 @@ void exit_failure(void)
exit(EXIT_FAILURE);
}
void erase_string(FILE * file, size_t max_len, char *s)
void erase_string(FILE *file, size_t max_len, char *s)
{
size_t l = strnlen(s, max_len);
for (size_t k = 0; k < l; k++) {
@ -142,9 +142,8 @@ void FREE(void *ptr)
{
if (ptr) {
free(ptr);
ptr = NULL;
} else {
lprintf(fatal, "attempted to double free a pointer!\n");
lprintf(fatal, "attempted to free NULL ptr!\n");
}
}

src/util.h

@ -25,12 +25,12 @@ int64_t round_div(int64_t a, int64_t b);
/**
* \brief wrapper for pthread_mutex_lock(), with error handling
*/
void PTHREAD_MUTEX_LOCK(pthread_mutex_t * x);
void PTHREAD_MUTEX_LOCK(pthread_mutex_t *x);
/**
* \brief wrapper for pthread_mutex_unlock(), with error handling
*/
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t * x);
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t *x);
/**
* \brief wrapper for exit(EXIT_FAILURE), with error handling
@ -40,7 +40,7 @@ void exit_failure(void);
/**
* \brief erase a string from the terminal
*/
void erase_string(FILE * file, size_t max_len, char *s);
void erase_string(FILE *file, size_t max_len, char *s);
/**
* \brief generate the salt for authentication string