Compare commits


285 Commits

Author SHA1 Message Date
Fufu Fang 07475660f1
updated LinkTable invalidation 2024-05-11 23:15:32 +01:00
Fufu Fang a5a53442b2
updated the description for refreshing directories 2024-05-11 17:28:23 +01:00
Fufu Fang 9e383ad7a3
Moved linktable freshness check around
Fixed https://github.com/fangfufu/httpdirfs/issues/141
2024-05-11 01:53:30 +01:00
Fufu Fang 127f2194d0
added more comments 2024-05-07 01:44:01 +01:00
Fufu Fang 4fb95ee5a0
attempt to fix codeql 2024-05-06 00:47:22 +01:00
Fufu Fang 720db5aafa
fixed cache system for percentage encoded file in single-file mode 2024-05-06 00:12:03 +01:00
Fufu Fang 28293b5ccd
fixed erroneous error check 2024-05-05 03:14:51 +01:00
Fufu Fang 1a20318654
added more debug statements 2024-05-05 02:55:10 +01:00
Fufu Fang 9a7eabd170
modified debug message 2024-05-05 02:04:31 +01:00
Fufu Fang 01fd2e9559
changed the way debug level works 2024-05-05 02:00:46 +01:00
Fufu Fang be666d72e9
removed semi-colon at the end of a macro 2024-05-05 00:32:00 +01:00
Fufu Fang 1fa3830dec
run through the formatter 2024-05-03 07:39:14 +01:00
Fufu Fang 8aa7c570c8
added a todo note 2024-05-03 07:37:44 +01:00
Fufu Fang 389a657170
improved debug message 2024-05-03 07:33:41 +01:00
Fufu Fang 257bb22e80
Merge branch 'master' into debug 2024-05-03 07:20:08 +01:00
Fufu Fang a299819b7d
fixed a memory leak, improved error handling in cache system 2024-05-03 07:19:24 +01:00
Fufu Fang 3e7d9f0294
start labelling what might be wrong. 2024-05-03 06:44:59 +01:00
Fufu Fang 63455c54cc
initial commit to the debug branch 2024-05-03 06:44:33 +01:00
Fufu Fang d4c7d8c92a
added more debug message 2024-05-03 06:44:01 +01:00
Fufu Fang dfc83d0e1c
improved debug message 2024-05-03 06:24:50 +01:00
Fufu Fang 96a7c248d3
improved debug message 2024-05-03 05:59:09 +01:00
Fufu Fang f92fe4232a
attempt to fix codeQL 2024-05-02 07:07:58 +01:00
Fufu Fang 91351689f1
LinkTable now saves the refresh time 2024-05-02 06:59:22 +01:00
Fufu Fang 1a3f36a92c
Corrected an implementation error and added more comments 2024-05-02 04:45:34 +01:00
Fufu Fang d6d4af0c8c
Update README.md
Fix https://github.com/fangfufu/httpdirfs/issues/136
2024-04-20 01:30:52 +01:00
Fufu Fang f48ee93931
Update README.md 2024-02-01 09:58:05 +00:00
Fufu Fang 983b1edfbd
Updated README 2024-02-01 06:28:36 +00:00
Fufu Fang 707d9b9253
Configure online code scanning tools
- Added .deepsource.toml for Deep Source
- Added configuration for GitHub CodeQL
2024-02-01 02:53:26 +00:00
Fufu Fang 81aac8bb57
fixed spelling, ran through the formatter 2024-01-13 12:31:47 +00:00
Mattias Runge-Broberg 35a213942c
Fix for single file mode not working
- Fix to avoid sending ranges which exceed the content-length, which
would result in an error.
- Fix for the byte range being set 1 byte too large; it should be the
end index, not the size, as described in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
2024-01-13 12:30:52 +00:00
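The off-by-one fixed above comes down to HTTP byte ranges being inclusive of the end index. A minimal sketch, using a hypothetical helper `build_range_header` (illustration only, not code from this repository):

```c
#include <stdio.h>

/* Hypothetical helper: format an HTTP Range header for a read of
 * `size` bytes starting at `offset`. The range is inclusive of both
 * ends, so the end index is offset + size - 1, not offset + size. */
static void build_range_header(char *buf, size_t buflen,
                               long offset, long size)
{
    snprintf(buf, buflen, "Range: bytes=%ld-%ld",
             offset, offset + size - 1);
}
```

Reading 4096 bytes at offset 0 thus yields `Range: bytes=0-4095`, not `bytes=0-4096`.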
Fufu Fang 595c6d275e
Remove spurious code
Remove spurious code flagged by 8451da6ac7,
which was introduced by e76b079fe6
Closes https://github.com/fangfufu/httpdirfs/issues/124
2023-10-03 23:10:24 +01:00
chrysn bd33966337 Allow leading `./` segments in links 2023-10-02 23:44:18 +01:00
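The `./` handling above might look like this minimal sketch; `skip_dot_slash` is a hypothetical stand-in, not the project's actual parser:

```c
#include <string.h>

/* Hypothetical sketch: normalise a relative href by dropping any
 * leading "./" segments, so "./subdir/" is treated like "subdir/". */
static const char *skip_dot_slash(const char *link)
{
    while (strncmp(link, "./", 2) == 0)
        link += 2;
    return link;
}
```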
Jonathan Kamens 29c3eb8f67 Convert build process to use autotools (autoconf, automake, etc.)
This commit converts the build process from a hand-written Makefile
that works on Linux, FreeBSD, and macOS, to an automatically generated
Makefile managed by the autotools toolset.

This includes:

* Add the compile, config.guess, config.sub, depcomp, install-sh, and
  missing helper scripts that autotools requires to be shipped with
  the package in order for configure to work.
* Rename Makefile to Makefile.am and restructure it for compatibility
  with autotools and specifically with the stuff in our configure
  script.
* Create the configure.ac source file which is turned into the
  configure script.
* Rename Doxyfile to Doxyfile.in so that the source directories can be
  substituted into it at configure time.
* Tweak .gitignore to ignore temporary and output files related to
  autotools.
* Generate Makefile.in, aclocal.m4, and configure using `autoreconf`
  and include them as checked-in source files.

While I can't fully document how autotools works here, the basic
workflow is that when you need to make changes to the build, you
update Makefile.am and/or configure.ac as needed, run `autoreconf`,
and commit the changes you made as well as any resulting changes to
Makefile.in, aclocal.m4, and configure. Makefile should _not_ be
committed into the source tree; it should always be generated using
configure on the system where the build is being run.
2023-09-29 23:45:47 +01:00
Jonathan Kamens ed93a133df Fix minor logic bug and code smell in make_link_relative
Don't assume that the reason we didn't find enough slashes in a
URL is that the user omitted the slash at the end of the host
name, unless we did find the first two slashes.

Add some curly braces around an if block to make it clear to people
and the compiler which statement an `else` applies to. The logic was
correct before but the indentation was wrong, making it especially
confusing.
2023-09-29 23:45:47 +01:00
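The slash-counting logic described above can be sketched as follows; `url_path_start` is a hypothetical illustration of the idea, not the project's `make_link_relative`:

```c
#include <string.h>

/* Hypothetical sketch: locate the path portion of an absolute URL.
 * Only once the "//" after the scheme is found do we treat a missing
 * third slash as "host name with no trailing slash"; if the first two
 * slashes are absent, the URL simply is not absolute. */
static const char *url_path_start(const char *url)
{
    const char *p = strstr(url, "://");
    if (!p)
        return NULL;       /* first two slashes not found */
    p = strchr(p + 3, '/');
    return p ? p : "";     /* no slash after the host: empty path */
}
```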
Jonathan Kamens 7bcd43068d Fix broken curl HTTP response code check
The check for the HTTP response code from the curl library was written
incorrectly and guaranteed to always fail. I've fixed the logic to
reflect what I believe was intended.
2023-09-29 23:45:47 +01:00
Jonathan Kamens ab49ca76b6 Add missing return value check for fread call 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8451da6ac7 Comment out small block of code that doesn't do anything
There's a small block of code that calls strnlen on a string, saves
the result in a variable, conditionally decrements the variable, and
then does nothing with it, making the entire block of code a no-op.

I don't want to just remove it entirely since it's possible that there
was intended to be some sort of check here that was inadvertently
omitted. So to make the compiler stop complaining I've commented out
the code, but I've left a comment above it explaining why it was
commented out and pointing out that maybe something different needs to
be done with it.
2023-09-29 23:45:47 +01:00
Jonathan Kamens e253b4a9ee Eliminate some compiler warnings 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8f0ef158c0 Remove spurious arguments to print_version() 2023-09-29 23:45:47 +01:00
Jonathan Kamens c532661d29 Add missing error-checking for return value of fread
Several calls to fread were missing checks to ensure that the expected
amount of data was read.
2023-09-29 23:45:47 +01:00
Jonathan Kamens 7363adaf12 Handle sites that put unencoded characters in URLs that curl dislikes
Some sites put unencoded characters in their href attributes that
really should be encoded, most notably spaces. Curl won't accept a URL
with a space in it, and perhaps other such characters as well. Address
this by properly encoding characters in URLs before feeding them to
Curl.
2023-09-29 12:47:55 +01:00
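A simplified sketch of the encoding step described above; `encode_spaces` is a hypothetical stand-in that handles only the space character, whereas the actual fix presumably covers other unsafe characters as well:

```c
#include <string.h>

/* Simplified sketch (not the project's actual encoder): percent-encode
 * the space characters in an href before handing the URL to libcurl,
 * which rejects URLs containing literal spaces. */
static void encode_spaces(const char *in, char *out, size_t outlen)
{
    size_t o = 0;
    for (; *in && o + 4 < outlen; in++) {
        if (*in == ' ') {
            memcpy(out + o, "%20", 3);  /* space -> %20 */
            o += 3;
        } else {
            out[o++] = *in;
        }
    }
    out[o] = '\0';
}
```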
Jonathan Kamens e94b5441f3 Add a few more debug messages to help trace program execution 2023-09-29 12:47:55 +01:00
Jonathan Kamens 3beccd2c2d Enabling debugging on command line should enable debug logging
I believe an appropriate expectation is that if the user enables
debugging with a command-line flag, then messages designated as
debug messages in the code should also be printed.
2023-09-29 12:47:55 +01:00
Jonathan Kamens 4d323b846f Do the right thing with sites that use absolute links
On some sites, the link to each subfolder is an absolute link rather
than a relative one. To accommodate this, convert the links from
absolute to relative before storing them in the link table.
2023-09-29 12:47:55 +01:00
Jonathan Kamens 41cb4b80bc Do the right thing with sites that require the final slash
Some web sites will return 404 if you fetch a directory without the
final slash. For example, https://archive.mozilla.org/pub/ works,
https://archive.mozilla.org/pub does not. We need to do two things to
accommodate this:

* When processing the root URL of the filesystem, instead of stripping
  off the final slash, just set the offset to ignore it.
* In the link structure, store the actual URL tail of the link
  separately from its name, final slash and all if there is one, and
  append that instead of the name when constructing the URL for curl.
2023-09-29 12:47:55 +01:00
Fufu Fang 1e80844831 ran the code through formatter 2023-07-26 07:48:33 +08:00
Fufu Fang 6d8db94458 minor formatting changes for PR #114 2023-07-26 07:48:22 +08:00
Fufu Fang 282605b0ac fix: changed deprecated libcurl call 2023-07-25 14:57:08 +08:00
Mike Morrison a309994b9e
Add setting to refresh directory contents (#114)
Refresh a directory's contents when fs_readdir is called
if it has been more than the number of seconds specified by
--refresh_timeout since the directory was last indexed.
2023-03-31 13:26:15 +01:00
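The refresh check described above reduces to a simple timestamp comparison. A hedged sketch, where `index_time` and `refresh_timeout` stand in for fields of the real LinkTable and configuration:

```c
#include <time.h>

/* Hypothetical sketch: a directory needs re-indexing when it was last
 * indexed more than `refresh_timeout` seconds before `now`. */
static int needs_refresh(time_t index_time, time_t now, long refresh_timeout)
{
    return (now - index_time) > refresh_timeout;
}
```

In the real code this check would run inside `fs_readdir`, triggering a fresh index of the directory when it returns true.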
Kian-Meng Ang 9a7016f29b
Fix typos (#117)
Found via `codespell`
2023-03-28 05:00:07 +01:00
Fufu Fang 8479feb2f6
Bumped version number to 1.2.5 for Debian release 2023-02-24 19:47:23 +00:00
Fufu Fang fe45afc6a1
Remove the usage of UBSAN
Address issue #113. Using UBSAN at runtime can introduce
vulnerabilities.

Original bug report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1031744

Reference:
https://www.openwall.com/lists/oss-security/2016/02/17/9
2023-02-23 01:44:18 +00:00
Jérôme Charaoui e9f60d5221
fix typo 2023-01-28 12:02:31 -05:00
Jérôme Charaoui 74fac1dce0
bump VERSION in Makefile 2023-01-28 12:01:06 -05:00
Fufu Fang 9b72f97bcf
Update README.md 2023-01-14 00:04:12 +00:00
Fufu Fang d91bb2b278
Update CHANGELOG.md 2023-01-11 23:56:19 +00:00
Fufu Fang f26a5bce25
Update CHANGELOG.md 2023-01-11 23:55:20 +00:00
Fufu Fang e6b5688e45
Modified Funkwhale sanitiser scheme 2022-11-06 23:45:13 +00:00
Fufu Fang 3acc093cdd
Merge pull request #106 from rdelaage/funkwhale_ioerror
Fix IO error with funkwhale subsonic API
2022-11-06 23:39:00 +00:00
Fufu Fang bb3b652135
Merge pull request #109 from nwf-msr/master
Add --cacert and --proxy-cacert
2022-11-02 08:19:26 +00:00
Nathaniel Wesley Filardo 12abb7d8ad Add --cacert and --proxy-cacert
Fixes https://github.com/fangfufu/httpdirfs/issues/108
2022-11-01 02:13:27 +00:00
Nathaniel Wesley Filardo ff5f566dd9 Link_download_full: don't FREE(NULL)
It's entirely possible that `ts.data` is `NULL` on an error path, so
handing it to `FREE()`, which bails on a `NULL` argument, is not ideal.
Just pass it to `free()` instead, which is required to no-op if given
`NULL`.
2022-11-01 01:59:03 +00:00
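The distinction above rests on a C standard guarantee: `free(NULL)` is defined to do nothing, so it is safe on error paths where a buffer may never have been allocated. A hedged sketch of a checking wrapper like the one described (this `FREE()` is an illustration, not the project's exact implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative checking wrapper: unlike free(), which is a guaranteed
 * no-op on NULL, this FREE() treats a NULL argument as a logic error
 * and bails out. That makes it unsuitable for error paths where the
 * pointer may legitimately still be NULL. */
static void FREE(void *ptr)
{
    if (ptr == NULL) {
        fprintf(stderr, "FREE(): attempted to free a NULL pointer\n");
        exit(EXIT_FAILURE);
    }
    free(ptr);
}
```

On an error path where `ts.data` may be NULL, plain `free(ts.data)` is therefore the safe choice.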
Nathaniel Wesley Filardo 833cbf9d67 Correct error message in FREE().
`FREE()` checks for a `NULL` pointer, but generally httpdirfs does not
`NULL` out pointers it attempts to `FREE()` (or `free()`).  As such, the
error message is misleading; make it less so in a trivial way.

Possibly a better, more invasive, change would be for `FREE()` to take a
`void **pp`, check that `*pp != NULL`, `free(*pp)`, and then `*pp = NULL;`.
Were that done, then there would be some plausibility to the current
diagnostic message.
2022-11-01 01:59:03 +00:00
Romain de Laage abef0c9406
Fix IO error with funkwhale subsonic API 2022-09-23 07:49:36 +02:00
Fufu Fang 61d3ae4166
Merge pull request #104 from nwf-msr/202206-small-fixes
Two small patches
2022-08-12 00:49:03 +01:00
Nathaniel Wesley Filardo 72d15ab6c7 fs_open: return EROFS for non-RO opens
The use of EACCES leads to slightly confusing error messages in
downstream consumers, so prefer EROFS to better articulate what's
actually happening.

While here, use O_RDWR to mask the open flags while testing for
non-RO access.  This is at least encouraged by POSIX with their
suggestion that "O_RDONLY | O_WRONLY == O_RDWR".
2022-06-28 15:00:48 +01:00
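A sketch of the read-only check described above. Note one deviation: this illustration masks with `O_ACCMODE`, the portable access-mode mask, whereas the commit itself masks with `O_RDWR`, leaning on POSIX's suggestion that `O_RDONLY | O_WRONLY == O_RDWR`:

```c
#include <fcntl.h>
#include <errno.h>

/* Hypothetical sketch: reject any open that is not read-only.
 * EROFS ("read-only file system") describes the situation more
 * precisely than EACCES ("permission denied"). */
static int check_readonly_open(int flags)
{
    if ((flags & O_ACCMODE) != O_RDONLY)
        return -EROFS;
    return 0;
}
```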
Nathaniel Wesley Filardo ffb2658abb getopt_long returns an int, not a char
On platforms with an unsigned char, such as Arm, this results in
always taking error paths around initialization.

Fixes https://github.com/fangfufu/httpdirfs/issues/103
2022-06-28 14:45:31 +01:00
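The fix above is the classic `getopt` pitfall: `getopt_long()` returns an `int` and signals the end of options with -1; if the result is stored in a plain `char` on a platform where `char` is unsigned (such as Arm), -1 becomes 255 and the loop condition never sees the terminator. A minimal sketch of the correct pattern (`count_verbose` is a made-up example, not code from this repository):

```c
#include <getopt.h>
#include <stddef.h>

/* Count --verbose / -v occurrences; the option-parsing result must be
 * stored in an int, NOT a char, so the -1 sentinel compares correctly
 * regardless of whether plain char is signed. */
static int count_verbose(int argc, char **argv)
{
    static struct option longopts[] = {
        { "verbose", no_argument, NULL, 'v' },
        { NULL, 0, NULL, 0 }
    };
    int n = 0;
    int c;                  /* int, not char */
    optind = 1;             /* reset parser state for repeated calls */
    while ((c = getopt_long(argc, argv, "v", longopts, NULL)) != -1)
        if (c == 'v')
            n++;
    return n;
}
```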
Jérôme Charaoui d1a10d489c add --name option to help2man
This resolves a lintian warning in Debian packaging
(manpage-has-useless-whatis-entry).
2022-04-24 00:27:12 -04:00
Fufu Fang 3b25cf31ef
Merge pull request #101 from moschlar/patch-1
Fix --insecure-tls in help and README
2022-04-23 02:49:50 +01:00
Fufu Fang d2207e7a4e
fixed --version switch 2022-04-23 02:49:16 +01:00
Jérôme Charaoui 66776261ca Remove generated manpage from repo
Packages generate it on the fly.
2022-04-22 12:32:47 -04:00
Moritz Schlarb a6f453c6a8
Update README.md 2022-04-04 15:38:38 +02:00
Moritz Schlarb 4d45525c64
Update main.c 2022-04-04 15:37:36 +02:00
Fufu Fang 40c750fac9 moved the location of error string 2021-09-04 13:37:45 +01:00
Fufu Fang 67edcc906f Clean up for the master branch 2021-09-04 12:41:33 +01:00
Fufu Fang cbe8c83195 stable version for master 2021-09-04 03:15:26 +01:00
Fufu Fang ebcfb0a79e periodic backup 2021-09-04 03:00:25 +01:00
Fufu Fang 5d539c30b1 started writing the ramcache 2021-09-04 01:28:01 +01:00
Fufu Fang 939e287c87 adjusted includes 2021-09-03 21:39:31 +01:00
Fufu Fang 6819ad09e4 removed unnecessary includes 2021-09-03 21:23:52 +01:00
Fufu Fang 7c6433f0cd more refactoring 2021-09-03 17:00:32 +01:00
Fufu Fang 1efe5932cf more refactoring 2021-09-03 16:58:08 +01:00
Fufu Fang ee32ddebc9 simplified network code 2021-09-03 16:36:50 +01:00
Fufu Fang dd8d887f94 more refactoring 2021-09-03 16:29:00 +01:00
Fufu Fang d403fa339b minor refactoring 2021-09-03 15:41:22 +01:00
Fufu Fang cd6bb5bee8 more refactoring 2021-09-03 14:56:11 +01:00
Fufu Fang bc88a681e3 check return for curl_easy_setopt, also new libcurl debug level 2021-09-03 12:57:52 +01:00
Fufu Fang 08eb04fb0e refactoring - now check return code from curl_easy_getinfo 2021-09-03 12:47:48 +01:00
Fufu Fang c64a139b46 refactoring transfer_blocking 2021-09-03 12:40:35 +01:00
Fufu Fang 177b738522 removed ts_ptr from Link 2021-09-02 16:52:39 +01:00
Fufu Fang d7086c6ecf Now clear the link->cache_ptr after closing the cache 2021-09-02 16:24:55 +01:00
Fufu Fang b96ed88bec improved debug statements 2021-09-02 16:07:39 +01:00
Fufu Fang 2d42313e8f compiles, but not running properly 2021-09-02 15:36:53 +01:00
Fufu Fang 31f8509f42 moved the *sonic related fields into a separate struct 2021-09-01 21:29:13 +01:00
Fufu Fang e7f06285df improved Makefile, fixed potential memory leak at Data_create 2021-09-01 12:34:53 +01:00
Fufu Fang 86003d2b6a Meta_create() now calls fclose itself 2021-09-01 12:19:20 +01:00
Fufu Fang 464c8e4863 Merged transfer status struct and transfer data struct 2021-09-01 11:56:18 +01:00
Fufu Fang a76366c481 improved error handling in path_download 2021-09-01 11:03:27 +01:00
Fufu Fang 8f9935ee5d moved cache_opened to cache.h 2021-09-01 10:39:33 +01:00
Fufu Fang 95b86825ed Added minimum transfer size in TransferDataStruct 2021-09-01 03:53:19 +01:00
Fufu Fang 08c1eeba49 added initial debug statements 2021-08-31 21:30:24 +01:00
Fufu Fang 8d2f7558a1
Update README.md 2021-08-31 19:54:03 +01:00
Fufu Fang 7dbc1750e4
modified README 2021-08-31 19:47:13 +01:00
Fufu Fang af2c5a16c9
Update CHANGELOG.md 2021-08-31 19:41:51 +01:00
Fufu Fang 41c42b2769
Update CHANGELOG.md 2021-08-31 19:00:10 +01:00
Fufu Fang d5356a2e2e
Update CHANGELOG.md 2021-08-31 18:59:48 +01:00
Fufu Fang 72f737de22
updated README 2021-08-31 18:58:23 +01:00
Fufu Fang 53c7e77575
updated Makefile 2021-08-31 18:55:56 +01:00
Fufu Fang 3c7e79089b
changed error handling for empty file 2021-08-31 18:54:58 +01:00
Fufu Fang 5e87ac92b0
Change error handling in cache.c, Updated Changelog.md 2021-08-31 18:49:49 +01:00
Fufu Fang a459dc926f
updated README.md 2021-08-31 15:39:54 +01:00
Fufu Fang ff97740dac
updated Makefile for man page generation, updated man page 2021-08-31 15:39:13 +01:00
Fufu Fang 7f9bb6e21d
updated README.md 2021-08-31 15:22:32 +01:00
Fufu Fang 87bca45a17
I don't use kate as an editor anymore 2021-08-31 14:07:35 +01:00
Fufu Fang 26c523afd6
updated README 2021-08-31 14:06:16 +01:00
Fufu Fang 1e98b16602
Now prints version number at startup 2021-08-31 14:00:47 +01:00
Fufu Fang f42264d3c3
Added single file mode
Implemented feature request
https://github.com/fangfufu/httpdirfs/issues/86
2021-08-31 13:52:25 +01:00
Fufu Fang af45bcfa19
replaced strlen with strnlen 2021-08-31 12:31:02 +01:00
Fufu Fang a9988c794a
added man into Makefile phony target 2021-08-31 12:25:11 +01:00
Fufu Fang 07603c36ba
improved log out, fixed help message output, Makefile now generates man page 2021-08-31 12:23:43 +01:00
Fufu Fang 6713362a5f
improved error handling 2021-08-31 11:59:28 +01:00
Fufu Fang 04212c3d55
added double free detection 2021-08-31 11:54:42 +01:00
Fufu Fang e02042cade
improved logging 2021-08-31 11:50:59 +01:00
Fufu Fang 45d8cb8136
changed indentation style 2021-08-31 11:18:39 +01:00
Fufu Fang f791ceb308
shortened error log format, changed indentation style 2021-08-31 11:15:00 +01:00
Fufu Fang b03954482e
changed linktable offset 2021-08-31 10:37:56 +01:00
Fufu Fang 7b6277cb3d
now check for the invalid CONFIG.mode 2021-08-31 00:23:50 +01:00
Fufu Fang 81ed433182
changed some comments and indentation 2021-08-31 00:13:17 +01:00
Fufu Fang 7542db7077
changed default log level 2021-08-30 12:05:02 +01:00
Fufu Fang 82c87d5120
fixed some comments 2021-08-30 12:03:51 +01:00
Fufu Fang 510969a780
fixed deadlock 2021-08-30 11:55:04 +01:00
Fufu Fang c2f409dcc7
updated coding convention file 2021-08-30 11:25:17 +01:00
Fufu Fang 0f3cc61875
relabelled all log outputs 2021-08-30 11:24:32 +01:00
Fufu Fang 0219d7460a
only network.c needs to be cleaned up 2021-08-30 05:17:15 +01:00
Fufu Fang 2a4c61477a
clean up lprintf statements - we have link.c and network.c left. 2021-08-30 03:43:45 +01:00
Fufu Fang 7813487c50
improved error message, removed unnecessary locks 2021-08-30 02:50:03 +01:00
Fufu Fang 14c4b3b486
updated logging facility 2021-08-29 22:46:24 +01:00
Fufu Fang f37cdefa47
Various changes
- Rather than using a flag to indicate operating mode, now we use
a variable.
- Change log printing level names
- Change the return for Cache_exist
2021-08-29 14:07:22 +01:00
Fufu Fang de516f23ff
Merge branch 'master' into single-file 2021-08-29 11:24:18 +01:00
Fufu Fang 15d6da357e
Added .vscode back into .gitignore 2021-08-29 11:23:55 +01:00
Fufu Fang b895fcb318
added the single-file-mode in the configuration struct 2021-08-29 11:09:41 +01:00
Fufu Fang 05ebf76094
fixed LinkTable_uninitialised_fill() status output 2021-08-29 10:58:11 +01:00
Fufu Fang 7c83da7e32
removed .vscode folder from .gitignore 2021-08-29 10:56:36 +01:00
Fufu Fang 4bf5631714
Revert 60b885181a
It breaks the cache system completely.
2021-08-29 10:52:49 +01:00
Fufu Fang 8777cf90bc
send help message to stdout 2021-08-22 02:56:38 +01:00
Fufu Fang 878f120fc2
Temporarily increasing the debugging level on the master branch
This will be reverted when the README is complete.
2021-08-22 02:28:00 +01:00
Fufu Fang 6d5267089f
improved debug messages 2021-08-22 02:26:09 +01:00
Fufu Fang 67ec1ad7e5
Separated out config.c and config.h 2021-08-22 00:51:37 +01:00
Fufu Fang 89df992053
fixed erroneous error handling 2021-08-21 02:40:20 +01:00
MecryWork 33bbd21e9f
Fix memory overflow
Co-authored-by: MecryWork
2021-08-12 13:28:37 +01:00
MecryWork 60b885181a
fix: Seg_exist function crashes when the second parameter is 0
Co-authored-by: liuchenghao
2021-08-09 10:36:09 +01:00
Fufu Fang 31617b146c
Updated the man page 2021-08-09 04:01:21 +01:00
Fufu Fang 5f86703f17
janitorial changes 2021-08-08 15:50:35 +01:00
Fufu Fang 9b23b69df2
Update Makefile 2021-08-08 14:32:36 +01:00
Fufu Fang 7e4ae034d8
updated CHANGELOG.md 2021-08-08 14:25:29 +01:00
Fufu Fang e76b079fe6
Fix issue #59
Stop duplicated links from showing for Apache servers configured
with the IconsAreLinks option.
2021-08-08 14:25:28 +01:00
Fufu Fang 8e6ff1a93d
replaced calloc with CALLOC wrapper function 2021-08-08 14:25:24 +01:00
Fufu Fang 65f91966d5
updated .gitignore 2021-08-05 00:38:08 +01:00
Fufu Fang 861481e6e1
Fixed issue #71
Now allows links which start with percentage encoding
2021-08-05 00:34:07 +01:00
Fufu Fang df94764dcb
Merge pull request #68 from MecryWork/master
fix: Failed to mount an empty file in the cache state
2021-07-27 09:44:05 +01:00
liuchenghao 2791f96603 fix: Failed to mount an empty file in the cache state 2021-07-27 15:55:19 +08:00
Fufu Fang fa586cd117
Update README.md 2021-06-05 12:32:20 +01:00
Fufu Fang 8481ab0c80
Merge pull request #66 from hiliev/macos-uninstall
Fix for uninstall on macOS
2021-06-05 11:52:46 +01:00
Hristo Iliev 92a73305f2 Uninstall for macOS 2021-06-05 11:07:46 +03:00
Fufu Fang daa7a7c4d0
Update README.md 2021-05-28 10:10:34 +01:00
Fufu Fang 907f73fa6e
Update Makefile 2021-05-27 22:17:41 +01:00
Fufu Fang 55ea241d90
Update CHANGELOG.md 2021-05-27 22:03:34 +01:00
Fufu Fang b4f5dd7273
Update CHANGELOG.md 2021-05-27 22:02:51 +01:00
Fufu Fang 3e4a438d75
Merge pull request #64 from hiliev/macos-install
Add install target for macOS
2021-05-27 21:01:08 +01:00
Hristo Iliev 49198d5125 Ensure target bin directory is created on macOS 2021-05-27 22:59:18 +03:00
Hristo Iliev a431dd6dce Merge branch 'fangfufu:master' into macos-install 2021-05-27 22:41:01 +03:00
Hristo Iliev 1df02b08e8 Add install target for macOS 2021-05-27 22:38:22 +03:00
Fufu Fang ef61c3c6da
Update README.md 2021-05-27 20:17:51 +01:00
Fufu Fang fbaf6b948a
Merge pull request #63 from hiliev/macos-patches
Add support for macOS
2021-05-27 20:15:52 +01:00
Hristo Iliev e553463dc4 Patches to build on macOS 2021-05-27 21:49:51 +03:00
Fufu Fang 0f7a97bcba
minor spacing issue in Makefile 2021-05-26 00:39:00 +01:00
Fufu Fang 465d3d48da
Update README.md 2021-04-02 13:04:52 +01:00
Fufu Fang 20aac32f22
Merge pull request #52 from cyberjunky/patch-1
Added dependencies for Ubuntu 18.04 LTS
2020-02-06 10:09:31 +00:00
Ron Klinkien a61f18eb86
Added dependencies for Ubuntu 18.04 LTS 2020-02-06 07:57:07 +01:00
Fufu Fang f0b958ac82
Merge pull request #50 from 0mp/patch-2
Update the website of the FreeBSD package
2019-11-23 16:23:59 +00:00
Mateusz Piotrowski 7dfac6abef
Update the website of the FreeBSD package 2019-11-23 16:55:49 +01:00
Fufu Fang c790a44c91
Merge pull request #49 from 0mp/patch-1
Update FreeBSD installation instructions
2019-11-12 11:01:36 +00:00
Mateusz Piotrowski 847e4eac82
Update FreeBSD installation instructions 2019-11-12 11:59:55 +01:00
Fufu Fang 6d144973b9
Update README.md and CHANGELOG.md 2019-11-01 13:02:51 +00:00
Fufu Fang 80c0b695a1
Merge pull request #48 from edenist/add-freebsd-support
Add freebsd support
2019-11-01 12:55:20 +00:00
Josh Lilly 068d5f1f97 Added instructions for build+install on FreeBSD 2019-11-01 16:36:10 +11:00
Josh Lilly b177039ee7 FreeBSD build + install support added to Makefile 2019-11-01 15:40:39 +11:00
Fufu Fang 5a36119289
Update README.md and CHANGELOG.md 2019-10-30 00:21:56 +00:00
Fufu Fang 12a2f87ada
Update documentation for sonic.c / sonic.h 2019-10-28 12:54:01 +00:00
Fufu Fang ea13c175cd
merged id3 non-root mode parser and index mode parser together 2019-10-28 12:54:01 +00:00
Fufu Fang 82b7bd337b
Update README.md 2019-10-28 12:53:52 +00:00
Fufu Fang 4b02980380
Added --config flag, updated CHANGELOG.md 2019-10-28 01:45:13 +00:00
Fufu Fang 55ad0cd9fc
converted sonic_id to a string, to support epoupon LMS 2019-10-28 01:09:55 +00:00
Fufu Fang e9c8689f8d
added -insecure_tls 2019-10-28 00:50:28 +00:00
Fufu Fang ea29af0e89
removed -g from Makefile 2019-10-28 00:27:03 +00:00
Fufu Fang c2eafdc6bf
Merge branch 'master' of github.com:fangfufu/httpdirfs 2019-10-28 00:21:04 +00:00
Fufu Fang 75da12bb80
fixed error in help 2019-10-28 00:20:59 +00:00
Fufu Fang 8da0c03eef
Update README.md 2019-10-28 00:20:34 +00:00
Fufu Fang 05776305cb
added --sonic-insecure authentication mode, now reports *sonic server errors 2019-10-28 00:16:44 +00:00
Fufu Fang ac3cea80d4
Merge pull request #47 from fangfufu/id3
Id3
2019-10-27 22:16:04 +00:00
Fufu Fang dd11e93238
Update README.md 2019-10-27 22:15:37 +00:00
Fufu Fang 48d6ae4144
updated README 2019-10-27 22:14:53 +00:00
Fufu Fang c2be88c6e4
Added flag to disable the check on server's support for HTTP range requests 2019-10-27 21:54:26 +00:00
Fufu Fang b7f25ca7ed
fixed regression of Sonic index mode 2019-10-27 21:33:58 +00:00
Fufu Fang ff1d34855c
Sonic ID3 mode is now working properly, but Sonic index mode stopped working 2019-10-27 21:21:30 +00:00
Fufu Fang 4b2ac94fe1
fixed linktable traversal 2019-10-27 11:31:25 +00:00
Fufu Fang 83f88dbe38
not sure why it crashes 2019-10-27 11:08:23 +00:00
Fufu Fang ef53cb83f6
Attempting to add ID3 support 2019-10-25 18:52:53 +01:00
Fufu Fang 2f920486be
Update README.md 2019-10-25 17:07:08 +01:00
Fufu Fang 8a4d47d71d
added flags for Sonic ID3 mode support 2019-10-25 17:07:08 +01:00
Fufu Fang 3bd7e67041
now enforce http range request check on *sonic server as well. 2019-10-25 17:06:55 +01:00
Fufu Fang 1105f8a0ba
removed spurious debugging messages 2019-10-25 03:07:36 +01:00
Fufu Fang 647b106a7c
fixed segfault if the root of the airsonic folder has music files 2019-10-25 02:55:41 +01:00
Fufu Fang 93b4711d75
removed some unnecessary compilation flags 2019-10-24 03:18:49 +01:00
Fufu Fang 0a5dd74b44
Minor documentation / stylistic updates 2019-10-24 03:15:30 +01:00
Fufu Fang 79d469d3b6
fixed --help text 2019-10-24 03:03:11 +01:00
Fufu Fang ad41cc2af7
Update README.md 2019-10-24 02:44:42 +01:00
Fufu Fang fa83a786be
Update README.md 2019-10-24 02:39:04 +01:00
Fufu Fang cf1d46edf4
fixed regression - cache system stopped working on regular http server
updated readme / help

Update README.md

Update README.md
2019-10-24 02:38:59 +01:00
Fufu Fang f3d5ffc3fc
now cache works on subsonic server 2019-10-24 02:15:05 +01:00
Fufu Fang 8206b4fa37
subsonic support is now added - no cache though 2019-10-24 00:57:37 +01:00
Fufu Fang a8ef8c88b5
added code to check if the server supports range requests 2019-10-24 00:44:18 +01:00
Fufu Fang cf700e5d3d
Changed sonic mode detection, fixed file listing 2019-10-23 22:34:46 +01:00
Fufu Fang f73643e32c
fixed a regression associated with invalid link detection 2019-10-23 22:10:33 +01:00
Fufu Fang 0f7623d1e7
successfully mounted the filesystem, now need to actually download the music file 2019-10-23 21:36:08 +01:00
Fufu Fang 5062f511bd
Finished writing the code to generate Subsonic LinkTable
- Also refactored various bits and pieces
2019-10-23 21:04:25 +01:00
Fufu Fang b7c63f4418
renamed MemoryStruct to DataStruct, removed spurious link type detection logic 2019-10-22 20:26:21 +01:00
Fufu Fang eb27257e47
updated changelog 2019-10-22 01:55:55 +01:00
Fufu Fang cde4a13005
successfully downloading xml file from subsonic server 2019-10-22 01:53:28 +01:00
Fufu Fang ed8452a4a3
factored out network / root link table initialisation code 2019-10-22 01:49:53 +01:00
Fufu Fang dec32b0bb4
removed main.c's extra warning messages when doing exit(EXIT_FAILURE) 2019-10-22 01:13:28 +01:00
Fufu Fang 65a9e7f908
half way writing sonic_LinkTable_new
- now need to write the parser
2019-10-22 00:42:46 +01:00
Fufu Fang de2e5c457f
updated changelog 2019-10-21 23:33:51 +01:00
Fufu Fang 49e4dc7217
updated changelog 2019-10-21 23:32:29 +01:00
Fufu Fang fbc8d3f8b2
Prepare to merge with master 2019-10-21 23:17:13 +01:00
Fufu Fang ad093f4fc0
Merge branch 'master' into SubsonicFS 2019-10-21 23:16:19 +01:00
Fufu Fang 50ccaaf43c
Bump version to 1.1.10 2019-10-21 23:16:03 +01:00
Fufu Fang 1a9c10f783
more changes to the subsonic module
completed sonic_gen_auth_str()

completed sonic_gen_url_first_part()

change calloc to CALLOC (the wrapper function with error handling)
2019-10-21 23:12:02 +01:00
Fufu Fang eaabc877a0
added md5 checksum generation and salt generation 2019-10-21 02:11:54 +01:00
Steve Langasek 44150667f5 Ensure libraries linked are listed after objects using them
The Ubuntu toolchain uses -Wl,--as-needed by default, which causes
libraries to be dropped from the final binary if they aren't used.  For
portability, make sure that libraries are always listed on the linker
commandline /after/ the objects that reference them.
.
This also avoids passing -l options to the compiler when compiling .o files.
2019-09-09 21:04:11 -04:00
Fufu Fang f13d4bbcd3
modified: CHANGELOG.md 2019-09-04 20:00:34 +01:00
Fufu Fang bc23ee03a2
Fixed regression: LinkTable caching now works again. 2019-09-04 19:53:11 +01:00
Fufu Fang 1493190692
Improved HTTP temporary failure error handling
- Added HTTP response code for Cloudflare timeout
- Improved HTTP temporary failure error handling during LinkTable generation
- All HTTP response codes are now checked in a single function
2019-09-04 18:42:59 +01:00
Fufu Fang ff67794b02
Now retry on HTTP 520 (Unknown error) 2019-09-04 17:57:15 +01:00
Fufu Fang 56e1095287
tidying stuff up 2019-09-04 17:43:30 +01:00
Fufu Fang aa4aae58b2
Added volatile into a variable, based on advice from andyhhp from SRCF.
[22:40] <andyhhp> curl_process_msgs()'s use of "static int slept" is dangerous and racy.  an optimising compiler can and probably will do bad things
[22:45] <ff266> with respect to "static int slept", should i just put a volatile in front of it? So "volatile static int slept"?
[22:46] <ff266> I meant "static volatile int slept;"
[22:47] <andyhhp> lets say yes for the sake of argument
[22:47] <andyhhp> "its complicated"
[22:47] <andyhhp> but that will broadly do what you want
2019-09-04 17:43:29 +01:00
Fufu Fang 79004cb7ee
andyhhp from SRCF told me to put "void" into functions that take no parameters. 2019-09-04 17:43:29 +01:00
Fufu Fang b6777c0478
Bugfix: no longer deadlocks after encountering HTTP 429 while filling up a LinkTable.
- Renamed some functions
- After the initial parse of the HTML file, files are no longer assigned as LINK_FILE. They are now assigned as LINK_UNINITIALISED_FILE.
- Link_req_file_stat() now crashes if the link type is other than LINK_UNINITIALISED_FILE.
2019-09-04 17:43:18 +01:00
Fufu Fang 367ce58e7f
change the maximum number of stack frames returned by backtrace() 2019-09-03 19:29:37 +01:00
Fufu Fang cf49bf86b8
improved LinkTable_fill() status message 2019-09-03 15:12:38 +01:00
Fufu Fang 656edbf578
improved error messages when mutex locking/unlocking fails 2019-09-03 14:59:30 +01:00
Fufu Fang c7dfa241d4
Backtrace will now be printed when the program crashes
- Note that static functions are not included in the printed backtrace.
2019-09-03 14:53:32 +01:00
Fufu Fang e971f9ab05
updated CHANGELOG.md 2019-09-03 14:04:53 +01:00
Fufu Fang 9ff099cd3a
added a status indicator when filling up the linktable 2019-09-03 14:02:41 +01:00
Fufu Fang 765f4e00d0
Updated Makefile, fixed issue #44
- When header files get changed, the relevant object will get recompiled.
2019-09-02 17:56:23 +01:00
Fufu Fang ee397d1513
Data_read() no longer gives warning messages when reaching the end of the cache file. 2019-09-02 16:51:42 +01:00
Fufu Fang 127c4ce651
updated changelog 2019-09-02 16:21:20 +01:00
Fufu Fang 4c0b7da34b
stop the background download thread from pre-fetching beyond EOF 2019-09-02 16:05:55 +01:00
Fufu Fang eb463478a8
The background download thread is being spawned again. 2019-09-02 15:47:10 +01:00
Fufu Fang 6c8a15d8cc
Fixed buffer over-read at the boundary.
- Say we are using a lock size of 1024k and we send a request for 128k at 1008k. It won't trigger the download, because we already downloaded 1024k at 0. So it would read from empty disk space!
- This problem only occurs during the first time you download a file. During subsequent accesses, when you are only reading from the cache, this problem does not occur.
2019-09-02 15:19:41 +01:00
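The boundary case above is block-index arithmetic: a read can straddle two lock-size blocks, so checking only the block containing the start offset lets the tail read past the downloaded data. A hedged sketch with hypothetical helpers, using the commit's numbers in KiB:

```c
/* Illustrative helpers (not the project's cache code): compute the
 * first and last lock-size blocks touched by a read of `size` bytes
 * at `offset`. The end offset is inclusive, hence the -1. */
static long first_block(long offset, long block_size)
{
    return offset / block_size;
}

static long last_block(long offset, long size, long block_size)
{
    return (offset + size - 1) / block_size;
}
```

With a 1024 KiB block, a 128 KiB read at 1008 KiB starts in block 0 but ends in block 1, so both blocks must be checked before serving the read.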
Fufu Fang 9e3e4747ae
fixed Cache_bgdl()
- Cache_bgdl() used to corrupt the cache file
2019-09-02 09:04:20 +01:00
Fufu Fang e06ea6dc06
Wrapped mutex lock / unlock functions into function rather than macro 2019-09-01 21:36:58 +01:00
Fufu Fang ed5457c76f
Bugfix: partially fixed the cache lock
- now when the same file is opened twice, the fread() output is consistent.
2019-09-01 11:39:47 +01:00
Fufu Fang 20f30a0e38
Tidied up some of the comments and formatting 2019-09-01 08:52:18 +01:00
Fufu Fang 378ca3363f
Merge branch 'master' of github.com:fangfufu/httpdirfs 2019-09-01 01:24:13 +01:00
Fufu Fang 044e3387e3
modified: CHANGELOG.md 2019-09-01 01:23:02 +01:00
Fufu Fang 1a44a4d960
Wrapped mutex locking and unlocking functions in error checking macro 2019-09-01 01:21:40 +01:00
Fufu Fang 67dc472fe6
Update CHANGELOG.md 2019-09-01 00:49:26 +01:00
Fufu Fang 55692cf511
Merge pull request #42 from fangfufu/cache_bug_fix
Cache system bug fix
2019-09-01 00:47:55 +01:00
Fufu Fang 92a9658c66
Cache system bug fix
- Now keep track of the number of times a file has been opened. The on-disk
cache file no longer gets opened multiple times, if a file is opened multiple
times.
2019-09-01 00:43:50 +01:00
Fufu Fang ef7630f9a8
Update CHANGELOG.md 2019-08-31 21:25:56 +01:00
Fufu Fang e447948762
Merge pull request #41 from fangfufu/cache_bug_fix
Directory listing performance improvement while file transfers are going on
2019-08-31 21:23:52 +01:00
Fufu Fang afb2a8fe6c
Directory listing performance improvement while file transfers are going on
- Added a LinkTable generation priority lock
- This allows LinkTable generation to be run exclusively. This
effectively gives LinkTable generation priority over file transfer.
2019-08-31 21:21:28 +01:00
Fufu Fang 1948bbd977
bump version number 2019-08-31 08:24:46 +01:00
Fufu Fang 3f7916e0ae
Update CHANGELOG.md 2019-08-31 08:24:21 +01:00
Fufu Fang f2549fb9e7
Update CHANGELOG.md 2019-08-31 08:23:22 +01:00
Fufu Fang d6fbcb4113 fixed issue #40
curl handles should NOT be added when there are transfers going on!!!
2019-08-31 08:10:36 +01:00
Jerome Charaoui a2587ca2c8 Update CHANGELOG, bump version 2019-08-30 13:10:12 -04:00
Jerome Charaoui 8f32c5b38f fix typo in manpage 2019-08-30 10:39:27 -04:00
Fufu Fang 600f3c3fe5 added more documentation 2019-08-27 10:52:46 +01:00
Fufu Fang 9a4a7b2c52 Updated README.md 2019-08-25 06:13:34 +01:00
Fufu Fang 242403098e
Update CHANGELOG.md 2019-08-24 18:18:53 +01:00
Fufu Fang 57044b6d6d Merge branch 'master' of github.com:fangfufu/httpdirfs 2019-08-24 18:14:48 +01:00
Fufu Fang 20577e516c updated README.md, suppress "-Wunused-function" in crypto lock function in network.c 2019-08-24 18:13:47 +01:00
Fufu Fang 97ecbffca0 Addressing linking error raised in issue #28
https://github.com/fangfufu/httpdirfs/issues/28#issuecomment-524497552
In Debian's GCC 9, the linker is sensitive to the ordering of the
libraries and object files.
2019-08-24 17:42:06 +01:00
40 changed files with 17915 additions and 1775 deletions

.deepsource.toml Normal file

@ -0,0 +1,4 @@
version = 1
[[analyzers]]
name = "cxx"

.github/workflows/codeql.yml vendored Normal file

@ -0,0 +1,91 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
schedule:
- cron: '18 19 * * 1'
jobs:
analyze:
name: Analyze
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners
# Consider using larger runners for possible analysis time improvements.
runs-on: 'ubuntu-latest'
timeout-minutes: 360
permissions:
# required for all workflows
security-events: write
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
language: [ 'c-cpp' ]
# CodeQL supports [ 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift' ]
# Use only 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use only 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libgumbo-dev libfuse-dev libssl-dev \
libcurl4-openssl-dev uuid-dev help2man libexpat1-dev pkg-config \
autoconf
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v3
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project requires a
# custom build step; please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

.gitignore vendored

@ -1,5 +1,30 @@
*.o
*.kate-swp
html
mnt
# Binaries
httpdirfs
# Intermediates
*.o
.depend
# Documentation
doc/html
# Editor related
*.kate-swp
.vscode
*.c~
*.h~
# autotools
autom4te.cache
#Others
mnt
# Generated files
Doxyfile
Makefile
config.log
config.status
doc
src/.deps
src/.dirstamp


@ -1,4 +0,0 @@
{
"name": "HTTPDirFS",
"files": [ { "git": 1 } ]
}


@ -5,7 +5,135 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Fixed
- The refreshed LinkTable is now saved
(https://github.com/fangfufu/httpdirfs/issues/141).
- Only one LinkTable of the same directory is created when the cache mode is
enabled (https://github.com/fangfufu/httpdirfs/issues/140).
- Cache mode now works correctly with escaped URLs
(https://github.com/fangfufu/httpdirfs/issues/138).
### Changed
- Improved LinkTable caching. LinkTable invalidation is now purely based on
timeout.
## [1.2.5] - 2023-02-24
### Fixed
- No longer compile with UBSAN enabled by default, to avoid introducing a
security vulnerability.
## [1.2.4] - 2023-01-11
### Added
- Add ``--cacert`` and ``--proxy-cacert`` options
### Fixed
- ``Link_download_full``: don't ``FREE(NULL)``
- Correct error message in ``FREE()``
- Error handling for ``fs_open`` and ``getopt_long``
- Fix IO error with funkwhale subsonic API
- Fix ``--insecure-tls`` in help and README
## [1.2.3] - 2021-08-31
### Added
- Single File Mode, which allows the mounting of a single file in a virtual
directory
- Manual page generation in Makefile.
### Changed
- Improve log / debug output.
- Removed unnecessary mutex lock/unlocks.
### Fixed
- Handling empty files from HTTP server
## [1.2.2] - 2021-08-08
### Fixed
- macOS uninstallation in Makefile.
- Filenames starting with percent-encoding are now parsed properly
- For Apache server configured with IconsAreLinks, the duplicated link no longer
shows up.
## [1.2.1] - 2021-05-27
### Added
- macOS compilation support.
## [1.2.0] - 2019-11-01
### Added
- Subsonic server support - this is dedicated to my Debian package maintainer
Jerome Charaoui
- You can now specify which configuration file to use by using the ``--config``
flag.
- Added support for turning off TLS certificate check (``--insecure_tls`` flag).
- Now check for server's support for HTTP range request, which can be turned off
using the ``--no-range-check`` flag.
### Changed
- Wrapped all calloc() calls with error handling functions.
- Various code refactoring
### Fixed
- Remove the erroneous error messages when the user supplies wrong command line
options.
- The same cache folder is used, irrespective of whether the server root URL
ends with '/'
- FreeBSD support
## [1.1.10] - 2019-09-10
### Added
- Added a progress indicator for LinkTable_fill().
- Backtrace will now be printed when the program crashes
- Note that static functions are not included in the printed backtrace!
### Changed
- Updated Makefile, fixed issue #44
- When header files get changed, the relevant object will get recompiled.
- Improved HTTP temporary failure error handling
- Now retry on the following HTTP error codes:
- 429 - Too Many Requests
- 520 - Cloudflare Unknown Error
- 524 - Cloudflare Timeout
### Fixed
- No longer deadlock after encountering HTTP 429 while filling up a LinkTable.
- LinkTable caching now works again.
## [1.1.9] - 2019-09-02
### Changed
- Improved the performance of directory listing generation while there are
on-going file transfers
- Wrapped mutex locking and unlocking functions in error checking functions.
### Fixed
- Fixed issue #40 - Crashes with "API function called from within callback".
- Cache system: now keep track of the number of times a cache file has been
opened.
- The on-disk cache file no longer gets opened multiple times, if
a file is opened multiple times. This used to cause inconsistencies
between two opened cache files.
- Cache system: Fixed buffer over-read at the boundary.
- Say we are using a lock size of 1024k and we send a request for 128k at
1008k. It won't trigger the download, because we have already downloaded the
first 1024k at byte 0. So it would read off from the empty disk space!
- This problem only occurred during the first time you download a file.
During subsequent accesses, when you are only reading from the cache, this
problem did not occur.
- Cache system: Previously it was possible for Cache_bgdl()'s download offset
to be modified by the parent thread after the child thread had been
launched. This used to cause permanent cache file corruption.
- Cache system: Cache_bgdl() no longer prefetches beyond EOF.
- Cache system: Data_read() no longer gives warning messages when reaching the
end of the cache file.
## [1.1.8] - 2019-08-24
### Changed
- Suppressed "-Wunused-function" in ``network.c`` for network related functions.
### Fixed
- Addressed the link ordering problem raised in issue #28
## [1.1.7] - 2019-08-23
### Added
@ -20,7 +148,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
- Now set a default cache directory
- path_append() now checks both the existing path and the appended path for '/'.
- Now additionally set CURLMOPT_MAX_HOST_CONNECTIONS to limit the amount of connection HTTPDirFS makes.
- Now additionally set CURLMOPT_MAX_HOST_CONNECTIONS to limit the amount of
connection HTTPDirFS makes.
## [1.1.5] - 2019-04-26
### Added
@ -101,7 +230,16 @@ ${XDG_CONFIG_HOME}/httpdirfs, rather than ${HOME}/.httpdirfs
## [1.0] - 2018-08-22
- Initial release, everything works correctly, as far as I know.
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.1.7...HEAD
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.2.5...master
[1.2.5]: https://github.com/fangfufu/httpdirfs/compare/1.2.4...1.2.5
[1.2.4]: https://github.com/fangfufu/httpdirfs/compare/1.2.3...1.2.4
[1.2.3]: https://github.com/fangfufu/httpdirfs/compare/1.2.2...1.2.3
[1.2.2]: https://github.com/fangfufu/httpdirfs/compare/1.2.1...1.2.2
[1.2.1]: https://github.com/fangfufu/httpdirfs/compare/1.2.0...1.2.1
[1.2.0]: https://github.com/fangfufu/httpdirfs/compare/1.1.10...1.2.0
[1.1.10]: https://github.com/fangfufu/httpdirfs/compare/1.1.9...1.1.10
[1.1.9]: https://github.com/fangfufu/httpdirfs/compare/1.1.8...1.1.9
[1.1.8]: https://github.com/fangfufu/httpdirfs/compare/1.1.7...1.1.8
[1.1.7]: https://github.com/fangfufu/httpdirfs/compare/1.1.6...1.1.7
[1.1.6]: https://github.com/fangfufu/httpdirfs/compare/1.1.5...1.1.6
[1.1.5]: https://github.com/fangfufu/httpdirfs/compare/1.1.4...1.1.5


@ -38,7 +38,7 @@ PROJECT_NAME = HTTPDirFS
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER =
PROJECT_NUMBER =
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@ -51,14 +51,14 @@ PROJECT_BRIEF = "A filesystem which allows you to mount HTTP directory
# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
# the logo to the output directory.
PROJECT_LOGO =
PROJECT_LOGO =
# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
# into which the generated documentation will be written. If a relative path is
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.
OUTPUT_DIRECTORY =
OUTPUT_DIRECTORY =
# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and
@ -162,7 +162,7 @@ FULL_PATH_NAMES = YES
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.
STRIP_FROM_PATH =
STRIP_FROM_PATH =
# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
@ -171,7 +171,7 @@ STRIP_FROM_PATH =
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.
STRIP_FROM_INC_PATH =
STRIP_FROM_INC_PATH =
# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful is your file systems doesn't
@ -238,13 +238,13 @@ TAB_SIZE = 4
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines.
ALIASES =
ALIASES =
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.
TCL_SUBST =
TCL_SUBST =
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
@ -291,7 +291,7 @@ OPTIMIZE_OUTPUT_VHDL = NO
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.
EXTENSION_MAPPING =
EXTENSION_MAPPING =
# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
@ -648,7 +648,7 @@ GENERATE_DEPRECATEDLIST= YES
# sections, marked by \if <section_label> ... \endif and \cond <section_label>
# ... \endcond blocks.
ENABLED_SECTIONS =
ENABLED_SECTIONS =
# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
# initial value of a variable or macro / define can have for it to appear in the
@ -690,7 +690,7 @@ SHOW_NAMESPACES = YES
# by doxygen. Whatever the program writes to standard output is used as the file
# version. For an example see the documentation.
FILE_VERSION_FILTER =
FILE_VERSION_FILTER =
# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
# by doxygen. The layout file controls the global structure of the generated
@ -703,7 +703,7 @@ FILE_VERSION_FILTER =
# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
# tag is left empty.
LAYOUT_FILE =
LAYOUT_FILE =
# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
@ -713,7 +713,7 @@ LAYOUT_FILE =
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.
CITE_BIB_FILES =
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
@ -778,7 +778,7 @@ WARN_FORMAT = "$file:$line: $text"
# messages should be written. If left blank the output is written to standard
# error (stderr).
WARN_LOGFILE =
WARN_LOGFILE =
#---------------------------------------------------------------------------
# Configuration options related to the input files
@ -790,8 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = . \
src
INPUT = @srcdir@ @srcdir@/src
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@ -874,7 +873,7 @@ RECURSIVE = NO
# Note that relative paths are relative to the directory from which doxygen is
# run.
EXCLUDE =
EXCLUDE =
# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
@ -890,7 +889,7 @@ EXCLUDE_SYMLINKS = NO
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
EXCLUDE_PATTERNS =
EXCLUDE_PATTERNS =
# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
@ -901,13 +900,13 @@ EXCLUDE_PATTERNS =
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*
EXCLUDE_SYMBOLS =
EXCLUDE_SYMBOLS = CALLOC lprintf FREE
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).
EXAMPLE_PATH =
EXAMPLE_PATH =
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
@ -927,7 +926,7 @@ EXAMPLE_RECURSIVE = NO
# that contain images that are to be included in the documentation (see the
# \image command).
IMAGE_PATH =
IMAGE_PATH =
# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
@ -948,7 +947,7 @@ IMAGE_PATH =
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.
INPUT_FILTER =
INPUT_FILTER =
# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
@ -961,7 +960,7 @@ INPUT_FILTER =
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.
FILTER_PATTERNS =
FILTER_PATTERNS =
# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
@ -976,14 +975,14 @@ FILTER_SOURCE_FILES = NO
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
FILTER_SOURCE_PATTERNS =
FILTER_SOURCE_PATTERNS =
# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.
USE_MDFILE_AS_MAINPAGE =
USE_MDFILE_AS_MAINPAGE = README.md
#---------------------------------------------------------------------------
# Configuration options related to source browsing
@ -1088,7 +1087,7 @@ CLANG_ASSISTED_PARSING = NO
# specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
CLANG_OPTIONS =
CLANG_OPTIONS =
#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
@ -1114,7 +1113,7 @@ COLS_IN_ALPHA_INDEX = 5
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
IGNORE_PREFIX =
IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
@ -1158,7 +1157,7 @@ HTML_FILE_EXTENSION = .html
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_HEADER =
HTML_HEADER =
# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
@ -1168,7 +1167,7 @@ HTML_HEADER =
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FOOTER =
HTML_FOOTER =
# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
@ -1180,7 +1179,7 @@ HTML_FOOTER =
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_STYLESHEET =
HTML_STYLESHEET =
# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
@ -1193,7 +1192,7 @@ HTML_STYLESHEET =
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_EXTRA_STYLESHEET =
HTML_EXTRA_STYLESHEET =
# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
@ -1203,7 +1202,7 @@ HTML_EXTRA_STYLESHEET =
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_EXTRA_FILES =
HTML_EXTRA_FILES =
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
@ -1332,7 +1331,7 @@ GENERATE_HTMLHELP = NO
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
CHM_FILE =
CHM_FILE =
# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
@ -1340,7 +1339,7 @@ CHM_FILE =
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
HHC_LOCATION =
HHC_LOCATION =
# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
@ -1353,7 +1352,7 @@ GENERATE_CHI = NO
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
CHM_INDEX_ENCODING =
CHM_INDEX_ENCODING =
# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
@ -1384,7 +1383,7 @@ GENERATE_QHP = NO
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.
QCH_FILE =
QCH_FILE =
# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
@ -1409,7 +1408,7 @@ QHP_VIRTUAL_FOLDER = doc
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_NAME =
QHP_CUST_FILTER_NAME =
# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
@ -1417,21 +1416,21 @@ QHP_CUST_FILTER_NAME =
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_ATTRS =
QHP_CUST_FILTER_ATTRS =
# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_SECT_FILTER_ATTRS =
QHP_SECT_FILTER_ATTRS =
# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHG_LOCATION =
QHG_LOCATION =
# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated, together with the HTML files, they form an Eclipse help plugin. To
@ -1564,7 +1563,7 @@ MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_EXTENSIONS =
MATHJAX_EXTENSIONS =
# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
@ -1572,7 +1571,7 @@ MATHJAX_EXTENSIONS =
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_CODEFILE =
MATHJAX_CODEFILE =
# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
@ -1632,7 +1631,7 @@ EXTERNAL_SEARCH = NO
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.
SEARCHENGINE_URL =
SEARCHENGINE_URL =
# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
# search data is written to a file for indexing by an external tool. With the
@ -1648,7 +1647,7 @@ SEARCHDATA_FILE = searchdata.xml
# projects and redirect the results back to the right project.
# This tag requires that the tag SEARCHENGINE is set to YES.
EXTERNAL_SEARCH_ID =
EXTERNAL_SEARCH_ID =
# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
# projects other than the one defined by this configuration file, but that are
@ -1658,7 +1657,7 @@ EXTERNAL_SEARCH_ID =
# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
# This tag requires that the tag SEARCHENGINE is set to YES.
EXTRA_SEARCH_MAPPINGS =
EXTRA_SEARCH_MAPPINGS =
#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
@ -1722,7 +1721,7 @@ PAPER_TYPE = a4
# If left blank no extra packages will be included.
# This tag requires that the tag GENERATE_LATEX is set to YES.
EXTRA_PACKAGES =
EXTRA_PACKAGES =
# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
# generated LaTeX document. The header should contain everything until the first
@ -1738,7 +1737,7 @@ EXTRA_PACKAGES =
# to HTML_HEADER.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_HEADER =
LATEX_HEADER =
# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
# generated LaTeX document. The footer should contain everything after the last
@ -1749,7 +1748,7 @@ LATEX_HEADER =
# Note: Only use a user-defined footer if you know what you are doing!
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_FOOTER =
LATEX_FOOTER =
# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# LaTeX style sheets that are included after the standard style sheets created
@ -1760,7 +1759,7 @@ LATEX_FOOTER =
# list).
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_EXTRA_STYLESHEET =
LATEX_EXTRA_STYLESHEET =
# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the LATEX_OUTPUT output
@ -1768,7 +1767,7 @@ LATEX_EXTRA_STYLESHEET =
# markers available.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_EXTRA_FILES =
LATEX_EXTRA_FILES =
# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
@ -1876,14 +1875,14 @@ RTF_HYPERLINKS = NO
# default style sheet that doxygen normally uses.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_STYLESHEET_FILE =
RTF_STYLESHEET_FILE =
# Set optional variables used in the generation of an RTF document. Syntax is
# similar to doxygen's config file. A template extensions file can be generated
# using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_EXTENSIONS_FILE =
RTF_EXTENSIONS_FILE =
# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
# with syntax highlighting in the RTF output.
@ -1928,7 +1927,7 @@ MAN_EXTENSION = .3
# MAN_EXTENSION with the initial . removed.
# This tag requires that the tag GENERATE_MAN is set to YES.
MAN_SUBDIR =
MAN_SUBDIR =
# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
# will generate one additional man file for each entity documented in the real
@ -2041,7 +2040,7 @@ PERLMOD_PRETTY = YES
# overwrite each other's variables.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.
PERLMOD_MAKEVAR_PREFIX =
PERLMOD_MAKEVAR_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
@ -2082,7 +2081,7 @@ SEARCH_INCLUDES = YES
# preprocessor.
# This tag requires that the tag SEARCH_INCLUDES is set to YES.
INCLUDE_PATH =
INCLUDE_PATH =
# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
# patterns (like *.h and *.hpp) to filter out the header-files in the
@ -2090,7 +2089,7 @@ INCLUDE_PATH =
# used.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
INCLUDE_FILE_PATTERNS =
INCLUDE_FILE_PATTERNS =
# The PREDEFINED tag can be used to specify one or more macro names that are
# defined before the preprocessor is started (similar to the -D option of e.g.
@ -2100,7 +2099,7 @@ INCLUDE_FILE_PATTERNS =
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
PREDEFINED =
PREDEFINED =
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
@ -2109,7 +2108,7 @@ PREDEFINED =
# definition found in the source code.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
EXPAND_AS_DEFINED =
EXPAND_AS_DEFINED =
# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
# remove all references to function-like macros that are alone on a line, have
@ -2138,13 +2137,13 @@ SKIP_FUNCTION_MACROS = YES
# the path). If a tag file is not located in the directory in which doxygen is
# run, you must also specify the path to the tagfile here.
TAGFILES =
TAGFILES =
# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
# tag file that is based on the input files it reads. See section "Linking to
# external documentation" for more information about the usage of tag files.
GENERATE_TAGFILE =
GENERATE_TAGFILE =
# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
# the class index. If set to NO, only the inherited external classes will be
@ -2193,14 +2192,14 @@ CLASS_DIAGRAMS = YES
# the mscgen tool resides. If left empty the tool is assumed to be found in the
# default search path.
MSCGEN_PATH =
MSCGEN_PATH =
# You can include diagrams made with dia in doxygen documentation. Doxygen will
# then run dia to produce the diagram and insert it in the documentation. The
# DIA_PATH tag allows you to specify the directory where the dia binary resides.
# If left empty dia is assumed to be found in the default search path.
DIA_PATH =
# If set to YES the inheritance and collaboration graphs will hide inheritance
# and usage relations if the target is undocumented or is not a class.
@@ -2249,7 +2248,7 @@ DOT_FONTSIZE = 10
# the path where dot can find it using this tag.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_FONTPATH =
# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
# each documented class showing the direct and indirect inheritance relations.
@@ -2395,26 +2394,26 @@ INTERACTIVE_SVG = YES
# found. If left blank, it is assumed the dot tool can be found in the path.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_PATH =
# The DOTFILE_DIRS tag can be used to specify one or more directories that
# contain dot files that are included in the documentation (see the \dotfile
# command).
# This tag requires that the tag HAVE_DOT is set to YES.
DOTFILE_DIRS =
# The MSCFILE_DIRS tag can be used to specify one or more directories that
# contain msc files that are included in the documentation (see the \mscfile
# command).
MSCFILE_DIRS =
# The DIAFILE_DIRS tag can be used to specify one or more directories that
# contain dia files that are included in the documentation (see the \diafile
# command).
DIAFILE_DIRS =
# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
# path where java can find the plantuml.jar file. If left blank, it is assumed
@@ -2422,17 +2421,17 @@ DIAFILE_DIRS =
# generate a warning when it encounters a \startuml command in this case and
# will not generate output for the diagram.
PLANTUML_JAR_PATH =
# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
# configuration file for plantuml.
PLANTUML_CFG_FILE =
# When using plantuml, the specified paths are searched for files specified by
# the !include statement in a plantuml block.
PLANTUML_INCLUDE_PATH =
# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
# that will be shown in the graph. If the number of nodes in a graph becomes


@@ -1,40 +0,0 @@
VERSION = 1.1.7
CFLAGS += -g -O2 -Wall -Wextra -Wshadow \
	-D_FILE_OFFSET_BITS=64 -DVERSION=\"$(VERSION)\" \
	`pkg-config --cflags-only-I gumbo libcurl fuse`
LDFLAGS += -pthread -lgumbo -lcurl -lfuse -lcrypto \
	`pkg-config --libs-only-L gumbo libcurl fuse`
COBJS = main.o network.o fuse_local.o link.o cache.o util.o
prefix ?= /usr/local

all: httpdirfs

%.o: src/%.c
	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<

httpdirfs: $(COBJS)
	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^

install:
	install -m 755 -D httpdirfs \
		$(DESTDIR)$(prefix)/bin/httpdirfs
	install -m 644 -D doc/man/httpdirfs.1 \
		$(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1

doc:
	doxygen Doxyfile

clean:
	-rm -f *.o
	-rm -f httpdirfs
	-rm -rf doc/html

distclean: clean

uninstall:
	-rm -f $(DESTDIR)$(prefix)/bin/httpdirfs
	-rm -f $(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1

.PHONY: all doc install clean distclean uninstall

Makefile.am Normal file

@@ -0,0 +1,36 @@
bin_PROGRAMS = httpdirfs
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
	src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c

# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
	-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\" \
	$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
LIBS += $(pkgconf_LIBS) $(fuse_LIBS)

man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*

# %.o: $(srcdir)/src/%.c
#	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
#	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)

man: doc/man/httpdirfs.1

doc/man/httpdirfs.1: httpdirfs
	mkdir -p doc/man
	rm -f doc/man/httpdirfs.1.tmp
	help2man --name "mount HTTP directory as a virtual filesystem" \
		--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
	mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1

doc:
	doxygen Doxyfile

format:
	astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h

.PHONY: man doc format

Makefile.in Normal file

@@ -0,0 +1,934 @@
# Makefile.in generated by automake 1.16.5 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2021 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
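The strip_trailopt helper in the MAKEFLAGS-scanning fragment above can be exercised standalone. A minimal sketch (the flag word is an arbitrary example, not taken from a real MAKEFLAGS value):

```shell
#!/bin/sh
# Standalone copy of the strip_trailopt helper defined in the
# generated Makefile.in fragment above: it truncates the current
# flag word at the first occurrence of the given option letter, so
# an option's glued-on argument (e.g. the path in -I/usr/include)
# is discarded when MAKEFLAGS is scanned for -n / -k.
strip_trailopt () {
  flg=`printf '%s\n' "$flg" | sed "s/$1.*$//"`
}

flg='-I/usr/include'   # example flag word
strip_trailopt 'I'
printf '%s\n' "$flg"   # prints "-"
```

This is why the surrounding loop can treat `-I?*` as self-contained but must set `skip_next=yes` for a bare `-I`, whose argument arrives as the next word.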
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
bin_PROGRAMS = httpdirfs$(EXEEXT)
subdir = .
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \
$(am__configure_deps) $(am__DIST_COMMON)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno config.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES = Doxyfile
CONFIG_CLEAN_VPATH_FILES =
am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"
PROGRAMS = $(bin_PROGRAMS)
am__dirstamp = $(am__leading_dot)dirstamp
am_httpdirfs_OBJECTS = src/main.$(OBJEXT) src/network.$(OBJEXT) \
src/fuse_local.$(OBJEXT) src/link.$(OBJEXT) \
src/cache.$(OBJEXT) src/util.$(OBJEXT) src/sonic.$(OBJEXT) \
src/log.$(OBJEXT) src/config.$(OBJEXT) src/memcache.$(OBJEXT)
httpdirfs_OBJECTS = $(am_httpdirfs_OBJECTS)
httpdirfs_LDADD = $(LDADD)
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__maybe_remake_depfiles = depfiles
am__depfiles_remade = src/$(DEPDIR)/cache.Po src/$(DEPDIR)/config.Po \
src/$(DEPDIR)/fuse_local.Po src/$(DEPDIR)/link.Po \
src/$(DEPDIR)/log.Po src/$(DEPDIR)/main.Po \
src/$(DEPDIR)/memcache.Po src/$(DEPDIR)/network.Po \
src/$(DEPDIR)/sonic.Po src/$(DEPDIR)/util.Po
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(httpdirfs_SOURCES)
DIST_SOURCES = $(httpdirfs_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
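The am__base_list pipeline above batches a newline-separated file list into install-command arguments, at most 40 per line (8 × 5 across two sed passes, matching am__install_max). Run standalone:

```shell
#!/bin/sh
# Demonstrates the two-stage sed batching used by am__base_list above:
# the first sed joins up to 8 input lines into one (seven $!N appends),
# the second joins up to 5 of those, yielding at most 40 items per
# output line.
printf '%s\n' a b c d e f g h i j \
  | sed '$!N;$!N;$!N;$!N;$!N;$!N;$!N;s/\n/ /g' \
  | sed '$!N;$!N;$!N;$!N;s/\n/ /g'
# prints a single line: "a b c d e f g h i j"
```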
man1dir = $(mandir)/man1
NROFF = nroff
MANS = $(man_MANS)
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
AM_RECURSIVE_TARGETS = cscope
am__DIST_COMMON = $(srcdir)/Doxyfile.in $(srcdir)/Makefile.in \
README.md compile config.guess config.sub depcomp install-sh \
missing
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
if test -d "$(distdir)"; then \
find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -rf "$(distdir)" \
|| { sleep 5 && rm -rf "$(distdir)"; }; \
else :; fi
am__post_remove_distdir = $(am__remove_distdir)
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
DIST_TARGETS = dist-gzip
# Exists only to be overridden by the user if desired.
AM_DISTCHECK_DVI_TARGET = dvi
distuninstallcheck_listfiles = find . -type f -print
am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
| sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\"\
$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
CPPFLAGS = @CPPFLAGS@
CSCOPE = @CSCOPE@
CTAGS = @CTAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
ETAGS = @ETAGS@
EXEEXT = @EXEEXT@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@ $(pkgconf_LIBS) $(fuse_LIBS)
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
NUCLA = @NUCLA@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
VERSION = @VERSION@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
fuse_CFLAGS = @fuse_CFLAGS@
fuse_LIBS = @fuse_LIBS@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkgconf_CFLAGS = @pkgconf_CFLAGS@
pkgconf_LIBS = @pkgconf_LIBS@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c
man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*
all: all-am
.SUFFIXES:
.SUFFIXES: .c .o .obj
am--refresh: Makefile
@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
$(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --foreign Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(top_srcdir)/configure: $(am__configure_deps)
$(am__cd) $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
$(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
$(am__aclocal_m4_deps):
Doxyfile: $(top_builddir)/config.status $(srcdir)/Doxyfile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
install-binPROGRAMS: $(bin_PROGRAMS)
@$(NORMAL_INSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(bindir)'"; \
$(MKDIR_P) "$(DESTDIR)$(bindir)" || exit 1; \
fi; \
for p in $$list; do echo "$$p $$p"; done | \
sed 's/$(EXEEXT)$$//' | \
while read p p1; do if test -f $$p \
; then echo "$$p"; echo "$$p"; else :; fi; \
done | \
sed -e 'p;s,.*/,,;n;h' \
-e 's|.*|.|' \
-e 'p;x;s,.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/' | \
sed 'N;N;N;s,\n, ,g' | \
$(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1 } \
{ d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \
if ($$2 == $$4) files[d] = files[d] " " $$1; \
else { print "f", $$3 "/" $$4, $$1; } } \
END { for (d in files) print "f", d, files[d] }' | \
while read type dir files; do \
if test "$$dir" = .; then dir=; else dir=/$$dir; fi; \
test -z "$$files" || { \
echo " $(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files '$(DESTDIR)$(bindir)$$dir'"; \
$(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files "$(DESTDIR)$(bindir)$$dir" || exit $$?; \
} \
; done
uninstall-binPROGRAMS:
@$(NORMAL_UNINSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
files=`for p in $$list; do echo "$$p"; done | \
sed -e 'h;s,^.*/,,;s/$(EXEEXT)$$//;$(transform)' \
-e 's/$$/$(EXEEXT)/' \
`; \
test -n "$$list" || exit 0; \
echo " ( cd '$(DESTDIR)$(bindir)' && rm -f" $$files ")"; \
cd "$(DESTDIR)$(bindir)" && rm -f $$files
clean-binPROGRAMS:
-test -z "$(bin_PROGRAMS)" || rm -f $(bin_PROGRAMS)
src/$(am__dirstamp):
@$(MKDIR_P) src
@: > src/$(am__dirstamp)
src/$(DEPDIR)/$(am__dirstamp):
@$(MKDIR_P) src/$(DEPDIR)
@: > src/$(DEPDIR)/$(am__dirstamp)
src/main.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/network.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/fuse_local.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/link.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/cache.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/util.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/sonic.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/log.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/config.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/memcache.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
httpdirfs$(EXEEXT): $(httpdirfs_OBJECTS) $(httpdirfs_DEPENDENCIES) $(EXTRA_httpdirfs_DEPENDENCIES)
@rm -f httpdirfs$(EXEEXT)
$(AM_V_CCLD)$(LINK) $(httpdirfs_OBJECTS) $(httpdirfs_LDADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
-rm -f src/*.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/cache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/config.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/fuse_local.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/link.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/log.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/main.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/memcache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/network.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/sonic.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/util.Po@am__quote@ # am--include-marker
$(am__depfiles_remade):
@$(MKDIR_P) $(@D)
@echo '# dummy' >$@-t && $(am__mv) $@-t $@
am--depfiles: $(am__depfiles_remade)
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
install-man1: $(man_MANS)
@$(NORMAL_INSTALL)
@list1=''; \
list2='$(man_MANS)'; \
test -n "$(man1dir)" \
&& test -n "`echo $$list1$$list2`" \
|| exit 0; \
echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \
$(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \
{ for i in $$list1; do echo "$$i"; done; \
if test -n "$$list2"; then \
for i in $$list2; do echo "$$i"; done \
| sed -n '/\.1[a-z]*$$/p'; \
fi; \
} | while read p; do \
if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
echo "$$d$$p"; echo "$$p"; \
done | \
sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
sed 'N;N;s,\n, ,g' | { \
list=; while read file base inst; do \
if test "$$base" = "$$inst"; then list="$$list $$file"; else \
echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \
$(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \
fi; \
done; \
for i in $$list; do echo "$$i"; done | $(am__base_list) | \
while read files; do \
test -z "$$files" || { \
echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \
$(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \
done; }
uninstall-man1:
@$(NORMAL_UNINSTALL)
@list=''; test -n "$(man1dir)" || exit 0; \
files=`{ for i in $$list; do echo "$$i"; done; \
l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
sed -n '/\.1[a-z]*$$/p'; \
} | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir)
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscope: cscope.files
test ! -s cscope.files \
|| $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS)
clean-cscope:
-rm -f cscope.files
cscope.files: clean-cscope cscopelist
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
-rm -f cscope.out cscope.in.out cscope.po.out cscope.files
distdir: $(BUILT_SOURCES)
$(MAKE) $(AM_MAKEFLAGS) distdir-am
distdir-am: $(DISTFILES)
$(am__remove_distdir)
test -d "$(distdir)" || mkdir "$(distdir)"
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
-test -n "$(am__skip_mode_fix)" \
|| find "$(distdir)" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r "$(distdir)"
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz
$(am__post_remove_distdir)
dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
$(am__post_remove_distdir)
dist-lzip: distdir
tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
$(am__post_remove_distdir)
dist-xz: distdir
tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
$(am__post_remove_distdir)
dist-zstd: distdir
tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst
$(am__post_remove_distdir)
dist-tarZ: distdir
@echo WARNING: "Support for distribution archives compressed with" \
"legacy program 'compress' is deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__post_remove_distdir)
dist-shar: distdir
@echo WARNING: "Support for shar distribution archives is" \
"deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz
$(am__post_remove_distdir)
dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__post_remove_distdir)
dist dist-all:
$(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:'
$(am__post_remove_distdir)
# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.lz*) \
lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
*.tar.xz*) \
xz -dc $(distdir).tar.xz | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
*.tar.zst*) \
zstd -dc $(distdir).tar.zst | $(am__untar) ;;\
esac
chmod -R a-w $(distdir)
chmod u+w $(distdir)
mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst
chmod a-w $(distdir)
test -d $(distdir)/_build || exit 0; \
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& am__cwd=`pwd` \
&& $(am__cd) $(distdir)/_build/sub \
&& ../../configure \
$(AM_DISTCHECK_CONFIGURE_FLAGS) \
$(DISTCHECK_CONFIGURE_FLAGS) \
--srcdir=../.. --prefix="$$dc_install_base" \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
&& cd "$$am__cwd" \
|| exit 1
$(am__post_remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
distuninstallcheck:
@test -n '$(distuninstallcheck_dir)' || { \
echo 'ERROR: trying to run $@ with an empty' \
'$$(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
$(am__cd) '$(distuninstallcheck_dir)' || { \
echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-am
all-am: Makefile $(PROGRAMS) $(MANS)
installdirs:
for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
-rm -f src/$(DEPDIR)/$(am__dirstamp)
-rm -f src/$(am__dirstamp)
-test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-binPROGRAMS clean-generic mostlyclean-am
distclean: distclean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-man
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am: install-binPROGRAMS
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man: install-man1
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-man
uninstall-man: uninstall-man1
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am am--depfiles am--refresh check \
check-am clean clean-binPROGRAMS clean-cscope clean-generic \
cscope cscopelist-am ctags ctags-am dist dist-all dist-bzip2 \
dist-gzip dist-lzip dist-shar dist-tarZ dist-xz dist-zip \
dist-zstd distcheck distclean distclean-compile \
distclean-generic distclean-tags distcleancheck distdir \
distuninstallcheck dvi dvi-am html html-am info info-am \
install install-am install-binPROGRAMS install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-man1 install-pdf \
install-pdf-am install-ps install-ps-am install-strip \
installcheck installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic pdf pdf-am ps ps-am tags tags-am uninstall \
uninstall-am uninstall-binPROGRAMS uninstall-man \
uninstall-man1
.PRECIOUS: Makefile
# %.o: $(srcdir)/src/%.c
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
man: doc/man/httpdirfs.1
doc/man/httpdirfs.1: httpdirfs
mkdir -p doc/man
rm -f doc/man/httpdirfs.1.tmp
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
.PHONY: man doc format
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

README.md

@ -1,56 +1,231 @@
[![CodeQL](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml/badge.svg)](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml)
[![CodeFactor](https://www.codefactor.io/repository/github/fangfufu/httpdirfs/badge)](https://www.codefactor.io/repository/github/fangfufu/httpdirfs)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/30af0a5b4d6f4a4d83ddb68f5193ad23)](https://app.codacy.com/gh/fangfufu/httpdirfs/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=fangfufu_httpdirfs&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=fangfufu_httpdirfs)
# HTTPDirFS - HTTP Directory Filesystem with a permanent cache, and Airsonic / Subsonic server support!
Have you ever wanted to mount those HTTP directory listings as if it was a
partition? Look no further, this is your solution. HTTPDirFS stands for Hyper
Text Transfer Protocol Directory Filesystem.
The performance of the program is excellent. HTTP connections are reused through
the curl-multi interface. The FUSE component runs in multithreaded mode.
There is a permanent cache system which can cache all the file segments you have
downloaded, so you don't need to download those segments again if you access
them later. This feature is triggered by the ``--cache`` flag. This is similar
to the ``--vfs-cache-mode full`` feature of
[rclone mount](https://rclone.org/commands/rclone_mount/#vfs-cache-mode-full).
There is support for Airsonic / Subsonic servers. This allows you to mount a
remote music collection locally.
If you only want to access a single file, there is also a simplified
Single File Mode. This can be especially useful if the web server does not
present an HTTP directory listing.
## Installation
Please note that if you install HTTPDirFS from a repository, the packaged
version can be outdated.
### Debian 12 "Bookworm"
HTTPDirFS is available as a package in Debian 12 "Bookworm". If you are on
Debian Bookworm, you can simply run the following command as ``root``:
apt install httpdirfs
For more information on the status of HTTPDirFS in Debian, please refer to the
[Debian package tracker](https://tracker.debian.org/pkg/httpdirfs-fuse).
### Arch Linux
HTTPDirFS is available in the
[Arch User Repository](https://aur.archlinux.org/packages/httpdirfs).
### FreeBSD
HTTPDirFS is available in the
[FreeBSD Ports Collection](https://www.freshports.org/sysutils/fusefs-httpdirfs/).
## Compilation
### Ubuntu
Under Ubuntu 22.04 LTS, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
### Debian 12 "Bookworm"
Under Debian 12 "Bookworm" and newer versions, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
### FreeBSD
The following dependencies are required from either pkg or ports:
Packages:
gmake fusefs-libs gumbo e2fsprogs-libuuid curl expat pkgconf help2man
If you want to be able to build the documentation ("gmake doc"), you also need
doxygen (devel/doxygen).
Ports:
devel/gmake sysutils/fusefs-libs devel/gumbo misc/e2fsprogs-libuuid ftp/curl textproc/expat2 devel/pkgconf devel/doxygen misc/help2man
**Note:** If you want brotli compression support, you will need to install curl
from ports and enable the option.
You can then build + install with:
./configure
gmake
sudo gmake install
Alternatively, you may use the FreeBSD [ports(7)](https://man.freebsd.org/ports/7)
infrastructure to build HTTPDirFS from source with the modifications you need.
### macOS
You need to install some packages from Homebrew:
brew install macfuse curl gumbo-parser openssl pkg-config help2man
If you want to be able to build the documentation ("make doc") you also need
help2man, doxygen, and graphviz.
Build and install:
./configure
make
sudo make install
Apple's command-line build tools are usually installed as part of setting up
Homebrew. HTTPDirFS will be installed in ``/usr/local``.
## Usage
./httpdirfs -f --cache $URL $MOUNT_POINT
An example URL would be
[Debian CD Image Server](https://cdimage.debian.org/debian-cd/). The ``-f`` flag
keeps the program in the foreground, which is useful for monitoring which URL
the filesystem is visiting.
### Useful options
HTTPDirFS options:
-f foreground operation
-s disable multi-threaded operation
-u --username HTTP authentication username
-p --password HTTP authentication password
-P --proxy Proxy for libcurl, for more details refer to
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html
--proxy-username Username for the proxy
--proxy-password Password for the proxy
--cache Enable cache (default: off)
--cache-location Set a custom cache location
(default: "${XDG_CACHE_HOME}/httpdirfs")
--dl-seg-size Set cache download segment size, in MB (default: 8)
Note: this setting is ignored if previously
cached data is found for the requested file.
--max-seg-count Set maximum number of download segments a file
can have. (default: 128*1024)
With the default setting, the maximum memory usage
per file is 128KB. This allows caching files up
to 1TB in size using the default segment size.
--max-conns Set maximum number of network connections that
libcurl is allowed to make. (default: 10)
--retry-wait Set delay in seconds before retrying an HTTP request
after encountering an error. (default: 5)
--user-agent Set user agent string (default: "HTTPDirFS")
--no-range-check Disable the built-in check for the server's support
for HTTP range requests
--insecure-tls Disable libcurl TLS certificate verification by
setting CURLOPT_SSL_VERIFYHOST to 0
--single-file-mode Single file mode - rather than mounting a whole
directory, present a single file inside a virtual
directory.
For mounting an Airsonic / Subsonic server:
--sonic-username The username for your Airsonic / Subsonic server
--sonic-password The password for your Airsonic / Subsonic server
--sonic-id3 Enable ID3 mode - this presents the server content in
Artist/Album/Song layout
--sonic-insecure Authenticate against your Airsonic / Subsonic server
using the insecure username / hex encoded password
scheme
Useful FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
## Airsonic / Subsonic server support
The Airsonic / Subsonic server support is dedicated to my Debian package
maintainer Jerome Charaoui. You can mount the music collection on your
Airsonic / Subsonic server (*sonic), and browse it using your favourite file
browser.
You simply have to supply both ``--sonic-username`` and ``--sonic-password`` to
trigger the *sonic server mode. For example:
./httpdirfs -f --cache --sonic-username $USERNAME --sonic-password $PASSWORD $URL $MOUNT_POINT
You definitely want to enable the cache for this one, otherwise it is painfully
slow.
There are two ways of mounting your *sonic server:
- the index mode
- and the ID3 mode.
In the index mode, the filesystem is presented based on the listing under the
``Index`` link on your *sonic server's home page.
In ID3 mode, the filesystem is presented using the following hierarchy:
0. Root
1. Alphabetical indices of the artists' names
2. The artists' names
3. All of the albums by a single artist
4. All the songs in an album.
By default, *sonic server is mounted in the index mode. If you want to mount in
ID3 mode, please use the ``--sonic-id3`` flag.
Please note that the cache feature is unaffected by how you mount your *sonic
server. If you mounted your server in index mode, the cache is still valid in
ID3 mode, and vice versa.
HTTPDirFS is also known to work with the following applications, which implement
some or all of the Subsonic API:
- [Funkwhale](https://funkwhale.audio/) (requires ``--sonic-id3`` and
``--no-range-check``, more information in
[issue #45](https://github.com/fangfufu/httpdirfs/issues/45))
- [LMS](https://github.com/epoupon/lms) (requires ``--sonic-insecure`` and
``--no-range-check``, more information in
[issue #46](https://github.com/fangfufu/httpdirfs/issues/46). To mount the
[demo instance](https://lms.demo.poupon.io/), you might also need
``--insecure-tls``)
- [Navidrome](https://github.com/navidrome/navidrome), more information in
[issue #51](https://github.com/fangfufu/httpdirfs/issues/51).
## Single file mode
If you just want to access a single file, you can specify
``--single-file-mode``. This effectively creates a virtual directory that
contains one single file. This operating mode is similar to the unmaintained
[httpfs](http://httpfs.sourceforge.net/).
e.g.
./httpdirfs -f --cache --single-file-mode https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-11.0.0-amd64-netinst.iso mnt
This can be useful if the web server does not present an HTTP directory listing.
This feature was implemented in response to GitHub
[issue #86](https://github.com/fangfufu/httpdirfs/issues/86).
## Permanent cache system
You can cache the files you have accessed permanently on your hard drive by
using the ``--cache`` flag. The files it caches persist across sessions.
By default, the cache files are stored under ``${XDG_CACHE_HOME}/httpdirfs``,
which by default is ``${HOME}/.cache/httpdirfs``. Each HTTP directory gets its
@ -64,93 +239,63 @@ maximum download speed is around 15MiB/s, as measured using my localhost as the
web server. However after you have accessed a file once, accessing it again will
be the same speed as accessing your hard drive.
If you have any patches to make the initial download go faster, please submit a
pull request.
The permanent cache system relies on sparse allocation. Please make sure your
filesystem supports it. Otherwise your hard drive / SSD will get heavy I/O from
cache file creation. For a list of filesystems that support sparse allocation,
please refer to
[Wikipedia](https://en.wikipedia.org/wiki/Comparison_of_file_systems#Allocation_and_layout_policies).
## Configuration file support
This program has basic support for using a configuration file. By default, the
configuration file which the program reads is
``${XDG_CONFIG_HOME}/httpdirfs/config``, which by
default is at ``${HOME}/.config/httpdirfs/config``. You will have to create the
sub-directory and the configuration file yourself. In the configuration file,
please supply one option per line. For example:
$ cat ${HOME}/.config/httpdirfs/config
--username test
--password test
-f
Alternatively, you can specify your own configuration file by using the
``--config`` option.
### Log levels
You can control how much log HTTPDirFS outputs by setting the
``HTTPDIRFS_LOG_LEVEL`` environmental variable. For details of the different
types of log that are supported, please refer to
[log.h](https://github.com/fangfufu/httpdirfs/blob/master/src/log.h) and
[log.c](https://github.com/fangfufu/httpdirfs/blob/master/src/log.c).
### Debugging Mutexes
By default, the debugging output associated with mutexes is not compiled. To
enable it, compile the program using the following command:
make CPPFLAGS=-DLOCK_DEBUG
## The Technical Details
I noticed that most HTTP directory listings don't provide the file size for the
web page itself. I suppose this makes perfect sense, as they are generated on
the fly. Whereas the actual files have got file sizes. So the listing pages can
be treated as folders, and the rest are files.
For the normal HTTP directories, this program downloads the HTML web pages/files
using [libcurl](https://curl.haxx.se/libcurl/), then parses the listing pages
using [Gumbo](https://github.com/google/gumbo-parser), and presents them using
[libfuse](https://github.com/libfuse/libfuse).
For *sonic servers, rather than using the Gumbo parser, this program parses
the servers' XML responses using
[expat](https://github.com/libexpat/libexpat).
The cache system stores the metadata and the downloaded file in two
separate directories. It uses ``uint8_t`` arrays to record which segments of the
file have been downloaded.
Note that HTTPDirFS requires the server to support HTTP Range Requests. Some
servers support this feature, but do not present ``Accept-Ranges: bytes`` in
their header responses. HTTPDirFS checks for this header field by default. You
can disable this check by using the ``--no-range-check`` flag.
## Other projects which incorporate HTTPDirFS
- [Curious Container](https://www.curious-containers.cc/docs/red-connector-http#mount-dir)
has a Python wrapper for mounting HTTPDirFS.
## Press Coverage
- Linux Format - Issue [264](https://www.linuxformat.com/archives?issue=264), July 2020
## Acknowledgement
- First of all, I would like to thank
[Jerome Charaoui](https://github.com/jcharaoui) for being the Debian Maintainer
@ -158,6 +303,12 @@ for this piece of software. Thank you so much for packaging it!
- I would like to thank
[Cosmin Gorgovan](https://scholar.google.co.uk/citations?user=S7UZ6MAAAAAJ&hl=en)
for the technical and moral support. Your wisdom is much appreciated!
- I would like to thank [Edenist](https://github.com/edenist) for providing FreeBSD
compatibility patches.
- I would like to thank [hiliev](https://github.com/hiliev) for providing macOS
compatibility patches.
- I would like to thank [Jonathan Kamens](https://github.com/jikamens) for providing
a whole bunch of code improvements and the improved build system.
- I would like to thank [-Archivist](https://www.reddit.com/user/-Archivist/)
for not providing FTP or WebDAV access to his server. This piece of software was
written in direct response to his appalling behaviour.

aclocal.m4 vendored Normal file

File diff suppressed because it is too large

compile Executable file

@ -0,0 +1,343 @@
#! /bin/sh
# Wrapper for compilers which do not understand '-c -o'.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# Written by Tom Tromey <tromey@cygnus.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
nl='
'
# We need space, tab and new line, in precisely that order. Quoting is
# there to prevent tools from complaining about whitespace usage.
IFS=" "" $nl"
file_conv=
# func_file_conv build_file lazy
# Convert a $build file to $host form and store it in $file
# Currently only supports Windows hosts. If the determined conversion
# type is listed in (the comma separated) LAZY, no conversion will
# take place.
func_file_conv ()
{
file=$1
case $file in
/ | /[!/]*) # absolute file, and not a UNC file
if test -z "$file_conv"; then
# lazily determine how to convert abs files
case `uname -s` in
MINGW*)
file_conv=mingw
;;
CYGWIN* | MSYS*)
file_conv=cygwin
;;
*)
file_conv=wine
;;
esac
fi
case $file_conv/,$2, in
*,$file_conv,*)
;;
mingw/*)
file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
;;
cygwin/* | msys/*)
file=`cygpath -m "$file" || echo "$file"`
;;
wine/*)
file=`winepath -w "$file" || echo "$file"`
;;
esac
;;
esac
}
# func_cl_dashL linkdir
# Make cl look for libraries in LINKDIR
func_cl_dashL ()
{
func_file_conv "$1"
if test -z "$lib_path"; then
lib_path=$file
else
lib_path="$lib_path;$file"
fi
linker_opts="$linker_opts -LIBPATH:$file"
}
# func_cl_dashl library
# Do a library search-path lookup for cl
func_cl_dashl ()
{
lib=$1
found=no
save_IFS=$IFS
IFS=';'
for dir in $lib_path $LIB
do
IFS=$save_IFS
if $shared && test -f "$dir/$lib.dll.lib"; then
found=yes
lib=$dir/$lib.dll.lib
break
fi
if test -f "$dir/$lib.lib"; then
found=yes
lib=$dir/$lib.lib
break
fi
if test -f "$dir/lib$lib.a"; then
found=yes
lib=$dir/lib$lib.a
break
fi
done
IFS=$save_IFS
if test "$found" != yes; then
lib=$lib.lib
fi
}
# func_cl_wrapper cl arg...
# Adjust compile command to suit cl
func_cl_wrapper ()
{
# Assume a capable shell
lib_path=
shared=:
linker_opts=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
eat=1
case $2 in
*.o | *.[oO][bB][jJ])
func_file_conv "$2"
set x "$@" -Fo"$file"
shift
;;
*)
func_file_conv "$2"
set x "$@" -Fe"$file"
shift
;;
esac
;;
-I)
eat=1
func_file_conv "$2" mingw
set x "$@" -I"$file"
shift
;;
-I*)
func_file_conv "${1#-I}" mingw
set x "$@" -I"$file"
shift
;;
-l)
eat=1
func_cl_dashl "$2"
set x "$@" "$lib"
shift
;;
-l*)
func_cl_dashl "${1#-l}"
set x "$@" "$lib"
shift
;;
-L)
eat=1
func_cl_dashL "$2"
;;
-L*)
func_cl_dashL "${1#-L}"
;;
-static)
shared=false
;;
-Wl,*)
arg=${1#-Wl,}
save_ifs="$IFS"; IFS=','
for flag in $arg; do
IFS="$save_ifs"
linker_opts="$linker_opts $flag"
done
IFS="$save_ifs"
;;
-Xlinker)
eat=1
linker_opts="$linker_opts $2"
;;
-*)
set x "$@" "$1"
shift
;;
*.cc | *.CC | *.cxx | *.CXX | *.[cC]++)
func_file_conv "$1"
set x "$@" -Tp"$file"
shift
;;
*.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO])
func_file_conv "$1" mingw
set x "$@" "$file"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -n "$linker_opts"; then
linker_opts="-link$linker_opts"
fi
exec "$@" $linker_opts
exit 1
}
eat=
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand '-c -o'.
Remove '-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file 'INSTALL'.
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "compile $scriptversion"
exit $?
;;
cl | *[/\\]cl | cl.exe | *[/\\]cl.exe | \
icl | *[/\\]icl | icl.exe | *[/\\]icl.exe )
func_cl_wrapper "$@" # Doesn't return...
;;
esac
ofile=
cfile=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
# So we strip '-o arg' only if arg is an object.
eat=1
case $2 in
*.o | *.obj)
ofile=$2
;;
*)
set x "$@" -o "$2"
shift
;;
esac
;;
*.c)
cfile=$1
set x "$@" "$1"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no '-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# '.c' file was seen then we are probably linking. That is also
# ok.
exec "$@"
fi
# Name of file we expect compiler to create.
cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'`
# Create the lock directory.
# Note: use '[/\\:.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d
while true; do
if mkdir "$lockdir" >/dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir '$lockdir'; exit 1" 1 2 15
# Run the compile.
"$@"
ret=$?
if test -f "$cofile"; then
test "$cofile" = "$ofile" || mv "$cofile" "$ofile"
elif test -f "${cofile}bj"; then
test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile"
fi
rmdir "$lockdir"
exit $ret
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:

config.guess vendored Executable file

File diff suppressed because it is too large

config.sub vendored Executable file

File diff suppressed because it is too large

configure vendored Executable file

File diff suppressed because it is too large

configure.ac Normal file

@ -0,0 +1,14 @@
AC_INIT([httpdirfs],[1.2.5])
AC_CANONICAL_BUILD
AC_CONFIG_FILES([Makefile Doxyfile])
AC_PROG_CC
AC_SEARCH_LIBS([backtrace],[execinfo])
# Because we use $(fuse_LIBS) in $(CFLAGS); see comment in Makefile.in
AX_CHECK_COMPILE_FLAG([-Wunused-command-line-argument],[NUCLA=-Wno-unused-command-line-argument],[-Werror])
AC_SUBST([NUCLA])
AM_INIT_AUTOMAKE([foreign subdir-objects])
PKG_CHECK_MODULES([pkgconf],[gumbo libcurl uuid expat openssl])
# This is separate because we need to be able to use $(fuse_LIBS) in CFLAGS
PKG_CHECK_MODULES([fuse],[fuse])
AC_OUTPUT

depcomp Executable file

@ -0,0 +1,786 @@
#! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAMS ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by 'PROGRAMS ARGS'.
object Object file output by 'PROGRAMS ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
# Get the directory component of the given path, and save it in the
# global variables '$dir'. Note that this directory component will
# be either empty or ending with a '/' character. This is deliberate.
set_dir_from ()
{
case $1 in
*/*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
*) dir=;;
esac
}
# Get the suffix-stripped basename of the given path, and save it the
# global variable '$base'.
set_base_from ()
{
base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
}
# If no dependency file was actually created by the compiler invocation,
# we still have to create a dummy depfile, to avoid errors with the
# Makefile "include basename.Plo" scheme.
make_dummy_depfile ()
{
echo "#dummy" > "$depfile"
}
# Factor out some common post-processing of the generated depfile.
# Requires the auxiliary global variable '$tmpdepfile' to be set.
aix_post_process_depfile ()
{
# If the compiler actually managed to produce a dependency file,
# post-process it.
if test -f "$tmpdepfile"; then
# Each line is of the form 'foo.o: dependency.h'.
# Do two passes, one to just change these to
# $object: dependency.h
# and one to simply output
# dependency.h:
# which is needed to avoid the deleted-header problem.
{ sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
} > "$depfile"
rm -f "$tmpdepfile"
else
make_dummy_depfile
fi
}
# A tabulation character.
tab=' '
# A newline character.
nl='
'
# Character ranges might be problematic outside the C locale.
# These definitions help.
upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lower=abcdefghijklmnopqrstuvwxyz
digits=0123456789
alpha=${upper}${lower}
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
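# Worked illustration of the default mapping above, assuming DEPDIR
# defaults to '.deps' (the 'example_*' variables are illustrative only
# and unused elsewhere in this script):
#   sub/bar.o  ->  sub/.deps/bar.Po
example_object=sub/bar.o
example_depfile=`echo "$example_object" |
  sed 's|[^\\/]*$|.deps/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`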
rm -f "$tmpdepfile"
# Avoid interferences from the environment.
gccflag= dashmflag=
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
cygpath_u="cygpath -u -f -"
if test "$depmode" = msvcmsys; then
# This is just like msvisualcpp but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvisualcpp
fi
if test "$depmode" = msvc7msys; then
# This is just like msvc7 but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvc7
fi
if test "$depmode" = xlc; then
# IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
gccflag=-qmakedep=gcc,-MF
depmode=gcc
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## Note that this doesn't just cater to obsolete pre-3.x GCC compilers,
## but also to in-use compilers like IBM xlc/xlC and the HP C compiler
## (see the conditional assignment to $gccflag above).
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say). Also, it might not be
## supported by the other compilers which use the 'gcc' depmode.
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The second -e expression handles DOS-style file names with drive
# letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the "deleted header file" problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
## Some versions of gcc put a space before the ':'. On the theory
## that the space means something, we add a space to the output as
## well. hp depmode also adds that space, but also prefixes the VPATH
## to the object. Take care to not repeat it in the output.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the source file depends on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like '#:fec' to the end of the
# dependency line.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
| tr "$nl" ' ' >> "$depfile"
echo >> "$depfile"
# The second pass generates a dummy entry for each header file.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile"
;;
xlc)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts '$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.u
tmpdepfile2=$base.u
tmpdepfile3=$dir.libs/$base.u
"$@" -Wc,-M
else
tmpdepfile1=$dir$base.u
tmpdepfile2=$dir$base.u
tmpdepfile3=$dir$base.u
"$@" -M
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
aix_post_process_depfile
;;
tcc)
# tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26.
# FIXME: That version was still under development at the time of writing.
# Make sure that this statement remains true also for stable, released
# versions.
# It will wrap lines (doesn't matter whether long or short) with a
# trailing '\', as in:
#
# foo.o : \
# foo.c \
# foo.h \
#
# It will put a trailing '\' even on the last line, and will use leading
# spaces rather than leading tabs (at least since its commit 0394caf7
# "Emit spaces for -MD").
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
# We have to change lines of the first kind to '$object: \'.
sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
# And for each line of the second kind, we have to emit a 'dep.h:'
# dummy dependency, to avoid the deleted-header problem.
sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
rm -f "$tmpdepfile"
;;
## The order of this option in the case statement is important, since the
## shell code in configure will try each of these formats in the order
## listed in this file. A plain '-MD' option would be understood by many
## compilers, so we must ensure this comes after the gcc and icc options.
pgcc)
# Portland's C compiler understands '-MD'.
# Will always output deps to 'file.d' where file is the root name of the
# source file under compilation, even if file resides in a subdirectory.
# The object file name does not affect the name of the '.d' file.
# pgcc 10.2 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using '\' :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
set_dir_from "$object"
# Use the source, not the object, to determine the base name, since
# that's sadly what pgcc will do too.
set_base_from "$source"
tmpdepfile=$base.d
# For projects that build the same source file twice into different object
# files, the pgcc approach of using the *source* file root name can cause
# problems in parallel builds. Use a locking strategy to avoid stomping on
# the same $tmpdepfile.
lockdir=$base.d-lock
trap "
echo '$0: caught signal, cleaning up...' >&2
rmdir '$lockdir'
exit 1
" 1 2 13 15
numtries=100
i=$numtries
while test $i -gt 0; do
# mkdir is a portable test-and-set.
if mkdir "$lockdir" 2>/dev/null; then
# This process acquired the lock.
"$@" -MD
stat=$?
# Release the lock.
rmdir "$lockdir"
break
else
# If the lock is being held by a different process, wait
# until the winning process is done or we timeout.
while test -d "$lockdir" && test $i -gt 0; do
sleep 1
i=`expr $i - 1`
done
fi
i=`expr $i - 1`
done
trap - 1 2 13 15
if test $i -le 0; then
echo "$0: failed to acquire lock after $numtries attempts" >&2
echo "$0: check lockdir '$lockdir'" >&2
exit 1
fi
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
# Add 'dependent.h:' lines.
sed -ne '2,${
s/^ *//
s/ \\*$//
s/$/:/
p
}' "$tmpdepfile" >> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in 'foo.d' instead, so we check for that too.
# Subdirectories are respected.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
# Libtool generates 2 separate objects for the 2 libraries. These
# two compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir$base.o.d # libtool 1.5
tmpdepfile2=$dir.libs/$base.o.d # Likewise.
tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
# Same post-processing that is required for AIX mode.
aix_post_process_depfile
;;
msvc7)
if test "$libtool" = yes; then
showIncludes=-Wc,-showIncludes
else
showIncludes=-showIncludes
fi
"$@" $showIncludes > "$tmpdepfile"
stat=$?
grep -v '^Note: including file: ' "$tmpdepfile"
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The first sed program below extracts the file names and escapes
# backslashes for cygpath. The second sed program outputs the file
# name when reading, but also accumulates all include files in the
# hold buffer in order to output them again at the end. This only
# works with sed implementations that can handle large buffers.
sed < "$tmpdepfile" -n '
/^Note: including file: *\(.*\)/ {
s//\1/
s/\\/\\\\/g
p
}' | $cygpath_u | sort -u | sed -n '
s/ /\\ /g
s/\(.*\)/'"$tab"'\1 \\/p
s/.\(.*\) \\/\1:/
H
$ {
s/.*/'"$tab"'/
G
p
}' >> "$depfile"
echo >> "$depfile" # make sure the fragment doesn't end with a backslash
rm -f "$tmpdepfile"
;;
msvc7msys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for ':'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
"$@" $dashmflag |
sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this sed invocation
# correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no eat=no
for arg
do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
if test $eat = yes; then
eat=no
continue
fi
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-arch)
eat=yes ;;
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix=`echo "$object" | sed 's/^.*\././'`
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
# makedepend may prepend the VPATH from the source file name to the object.
# No need to regex-escape $object, excess matching of '.' is harmless.
sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process the last invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed '1,2d' "$tmpdepfile" \
| tr ' ' "$nl" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E \
| sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
| sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
IFS=" "
for arg
do
case "$arg" in
-o)
shift
;;
$object)
shift
;;
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E 2>/dev/null |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
echo "$tab" >> "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvcmsys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:


@ -1,226 +0,0 @@
.TH HTTPDIRFS "1" "August 2019" "HTTPDirFS version 1.1.7" "User Commands"
.SH NAME
HTTPDirFS \- filesystem client for HTTP directory listing
.SH SYNOPSIS
.B httpdirfs
[\fI\,options\/\fR] \fI\,URL mountpoint\/\fR
.SH DESCRIPTION
HTTPDirFS is a program that can be used to mount HTTP directory listings
(generated using an Apache DirectoryIndex, for example) as a virtual filesystem
through the FUSE interface. It supports HTTP basic authentication and proxies.
.SH OPTIONS
.SS "General options:"
.TP
\fB\-o\fR opt,[opt...]
mount options
.TP
\fB\-h\fR \fB\-\-help\fR
print help
.TP
\fB\-V\fR \fB\-\-version\fR
print version
.SS "HTTPDirFS options:"
.TP
\fB\-u\fR \fB\-\-username\fR
HTTP authentication username
.TP
\fB\-p\fR \fB\-\-password\fR
HTTP authentication password
.TP
\fB\-P\fR \fB\-\-proxy\fR
Proxy for libcurl, for more details refer to
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html
.TP
\fB\-\-proxy\-username\fR
Username for the proxy
.TP
\fB\-\-proxy\-password\fR
Password for the proxy
.TP
\fB\-\-cache\fR
Enable cache (default: off)
.TP
\fB\-\-cache\-location\fR
Set a custom cache location
(default: "${XDG_CACHE_HOME}/httpdirfs")
.TP
\fB\-\-dl\-seg\-size\fR
Set cache download segment size, in MB (default: 8)
Note: this setting is ignored if previously
cached data is found for the requested file.
.TP
\fB\-\-max\-seg\-count\fR
Set maximum number of download segments a file
can have. (default: 128*1024)
With the default setting, the maximum memory usage
per file is 128KB. This allows caching files up
to 1TB in size using the default segment size.
.TP
\fB\-\-max\-conns\fR
Set maximum number of network connections that
libcurl is allowed to make. (default: 10)
.TP
\fB\-\-retry\-wait\fR
Set delay in seconds before retrying an HTTP request
after encountering an error. (default: 5)
.TP
\fB\-\-user\-agent\fR
Set user agent string (default: "HTTPDirFS")
.SS "FUSE options:"
.TP
\fB\-d\fR \fB\-o\fR debug
enable debug output (implies \fB\-f\fR)
.TP
\fB\-f\fR
foreground operation
.TP
\fB\-s\fR
disable multi\-threaded operation
.TP
\fB\-o\fR allow_other
allow access to other users
.TP
\fB\-o\fR allow_root
allow access to root
.TP
\fB\-o\fR auto_unmount
auto unmount on process termination
.TP
\fB\-o\fR nonempty
allow mounts over non\-empty file/dir
.HP
\fB\-o\fR default_permissions enable permission checking by kernel
.TP
\fB\-o\fR fsname=NAME
set filesystem name
.TP
\fB\-o\fR subtype=NAME
set filesystem type
.TP
\fB\-o\fR large_read
issue large read requests (2.4 only)
.TP
\fB\-o\fR max_read=N
set maximum size of read requests
.TP
\fB\-o\fR hard_remove
immediate removal (don't hide files)
.TP
\fB\-o\fR use_ino
let filesystem set inode numbers
.TP
\fB\-o\fR readdir_ino
try to fill in d_ino in readdir
.TP
\fB\-o\fR direct_io
use direct I/O
.TP
\fB\-o\fR kernel_cache
cache files in kernel
.TP
\fB\-o\fR [no]auto_cache
enable caching based on modification times (off)
.TP
\fB\-o\fR umask=M
set file permissions (octal)
.TP
\fB\-o\fR uid=N
set file owner
.TP
\fB\-o\fR gid=N
set file group
.TP
\fB\-o\fR entry_timeout=T
cache timeout for names (1.0s)
.TP
\fB\-o\fR negative_timeout=T
cache timeout for deleted names (0.0s)
.TP
\fB\-o\fR attr_timeout=T
cache timeout for attributes (1.0s)
.TP
\fB\-o\fR ac_attr_timeout=T
auto cache timeout for attributes (attr_timeout)
.TP
\fB\-o\fR noforget
never forget cached inodes
.TP
\fB\-o\fR remember=T
remember cached inodes for T seconds (0s)
.TP
\fB\-o\fR nopath
don't supply path if not necessary
.TP
\fB\-o\fR intr
allow requests to be interrupted
.TP
\fB\-o\fR intr_signal=NUM
signal to send on interrupt (10)
.TP
\fB\-o\fR modules=M1[:M2...]
names of modules to push onto filesystem stack
.TP
\fB\-o\fR max_write=N
set maximum size of write requests
.TP
\fB\-o\fR max_readahead=N
set maximum readahead
.TP
\fB\-o\fR max_background=N
set number of maximum background requests
.TP
\fB\-o\fR congestion_threshold=N
set kernel's congestion threshold
.TP
\fB\-o\fR async_read
perform reads asynchronously (default)
.TP
\fB\-o\fR sync_read
perform reads synchronously
.TP
\fB\-o\fR atomic_o_trunc
enable atomic open+truncate support
.TP
\fB\-o\fR big_writes
enable larger than 4kB writes
.TP
\fB\-o\fR no_remote_lock
disable remote file locking
.TP
\fB\-o\fR no_remote_flock
disable remote file locking (BSD)
.HP
\fB\-o\fR no_remote_posix_lock disable remote file locking (POSIX)
.TP
\fB\-o\fR [no_]splice_write
use splice to write to the fuse device
.TP
\fB\-o\fR [no_]splice_move
move data while splicing to the fuse device
.TP
\fB\-o\fR [no_]splice_read
use splice to read from the fuse device
.PP
Module options:
.PP
[iconv]
.TP
\fB\-o\fR from_code=CHARSET
original encoding of file names (default: UTF\-8)
.TP
\fB\-o\fR to_code=CHARSET
new encoding of the file names (default: ANSI_X3.4\-1968)
.PP
[subdir]
.TP
\fB\-o\fR subdir=DIR
prepend this directory to all paths (mandatory)
.TP
\fB\-o\fR [no]rellinks
transform absolute symlinks to relative
.SH AUTHORS
.LP
HTTPDirFS has been written by Fufu Fang <fangfufu2003@gmail.com>.
.LP
This manpage was written by Jerome Charaoui <jerome@riseup.net> for the
Debian GNU/Linux distribution (but it may be used by others).

install-sh Executable file

@ -0,0 +1,533 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2020-11-14.01; # UTC
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# 'make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
tab=' '
nl='
'
IFS=" $tab$nl"
# Set DOITPROG to "echo" to test this script.
doit=${DOITPROG-}
doit_exec=${doit:-exec}
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
chgrpprog=${CHGRPPROG-chgrp}
chmodprog=${CHMODPROG-chmod}
chownprog=${CHOWNPROG-chown}
cmpprog=${CMPPROG-cmp}
cpprog=${CPPROG-cp}
mkdirprog=${MKDIRPROG-mkdir}
mvprog=${MVPROG-mv}
rmprog=${RMPROG-rm}
stripprog=${STRIPPROG-strip}
posix_mkdir=
# Desired mode of installed file.
mode=0755
# Create dirs (including intermediate dirs) using mode 755.
# This is like GNU 'install' as of coreutils 8.32 (2020).
mkdir_umask=22
backupsuffix=
chgrpcmd=
chmodcmd=$chmodprog
chowncmd=
mvcmd=$mvprog
rmcmd="$rmprog -f"
stripcmd=
src=
dst=
dir_arg=
dst_arg=
copy_on_change=false
is_target_a_directory=possibly
usage="\
Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
--help display this help and exit.
--version display version info and exit.
-c (ignored)
-C install only if different (preserve data modification time)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-p pass -p to $cpprog.
-s $stripprog installed files.
-S SUFFIX attempt to back up existing files, with suffix SUFFIX.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG
RMPROG STRIPPROG
By default, rm is invoked with -f; when overridden with RMPROG,
it's up to you to specify -f if you want it.
If -S is not specified, no backups are attempted.
Email bug reports to bug-automake@gnu.org.
Automake home page: https://www.gnu.org/software/automake/
"
while test $# -ne 0; do
case $1 in
-c) ;;
-C) copy_on_change=true;;
-d) dir_arg=true;;
-g) chgrpcmd="$chgrpprog $2"
shift;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
case $mode in
*' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
shift;;
-o) chowncmd="$chownprog $2"
shift;;
-p) cpprog="$cpprog -p";;
-s) stripcmd=$stripprog;;
-S) backupsuffix="$2"
shift;;
-t)
is_target_a_directory=always
dst_arg=$2
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
shift;;
-T) is_target_a_directory=never;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
shift
done
# We allow the use of options -d and -T together, by making -d
# take the precedence; this is for compatibility with GNU install.
if test -n "$dir_arg"; then
if test -n "$dst_arg"; then
echo "$0: target directory not allowed when installing a directory." >&2
exit 1
fi
fi
if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dst_arg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dst_arg"
shift # fnord
fi
shift # arg
dst_arg=$arg
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call 'install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
if test $# -gt 1 || test "$is_target_a_directory" = always; then
if test ! -d "$dst_arg"; then
echo "$0: $dst_arg: Is not a directory." >&2
exit 1
fi
fi
fi
if test -z "$dir_arg"; then
do_exit='(exit $ret); exit $ret'
trap "ret=129; $do_exit" 1
trap "ret=130; $do_exit" 2
trap "ret=141; $do_exit" 13
trap "ret=143; $do_exit" 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names problematic for 'test' and other utilities.
case $src in
-* | [=\(\)!]) src=./$src;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
# Don't chown directories that already exist.
if test $dstdir_status = 0; then
chowncmd=""
fi
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dst_arg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dst_arg
# If destination is a directory, append the input filename.
if test -d "$dst"; then
if test "$is_target_a_directory" = never; then
echo "$0: $dst_arg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dstbase=`basename "$src"`
case $dst in
*/) dst=$dst$dstbase;;
*) dst=$dst/$dstbase;;
esac
dstdir_status=0
else
dstdir=`dirname "$dst"`
test -d "$dstdir"
dstdir_status=$?
fi
fi
case $dstdir in
*/) dstdirslash=$dstdir;;
*) dstdirslash=$dstdir/;;
esac
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
# The $RANDOM variable is not portable (e.g., dash); still, use it
# here when available, just to lower the chance of collisions.
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap '
ret=$?
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null
exit $ret
' 0
# Because "mkdir -p" follows existing symlinks and we likely work
# directly in world-writeable /tmp, make sure that the '$tmpdir'
# directory is successfully created first before we actually test
# 'mkdir -p'.
if (umask $mkdir_umask &&
$mkdirprog $mkdir_mode "$tmpdir" &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
test_tmpdir="$tmpdir/a"
ls_ld_tmpdir=`ls -ld "$test_tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$test_tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null
fi
trap '' 0;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix='/';;
[-=\(\)!]*) prefix='./';;
*) prefix='';;
esac
oIFS=$IFS
IFS=/
set -f
set fnord $dstdir
shift
set +f
IFS=$oIFS
prefixes=
for d
do
test X"$d" = X && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=${dstdirslash}_inst.$$_
rmtmp=${dstdirslash}_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask &&
{ test -z "$stripcmd" || {
# Create $dsttmp read-write so that cp doesn't create it read-only,
# which would cause strip to fail.
if test -z "$doit"; then
: >"$dsttmp" # No need to fork-exec 'touch'.
else
$doit touch "$dsttmp"
fi
}
} &&
$doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } &&
{ test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } &&
{ test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# If -C, don't bother to copy if it wouldn't change the file.
if $copy_on_change &&
old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` &&
new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` &&
set -f &&
set X $old && old=:$2:$4:$5:$6 &&
set X $new && new=:$2:$4:$5:$6 &&
set +f &&
test "$old" = "$new" &&
$cmpprog "$dst" "$dsttmp" >/dev/null 2>&1
then
rm -f "$dsttmp"
else
# If $backupsuffix is set, and the file being installed
# already exists, attempt a backup. Don't worry if it fails,
# e.g., if mv doesn't support -f.
if test -n "$backupsuffix" && test -f "$dst"; then
$doit $mvcmd -f "$dst" "$dst$backupsuffix" 2>/dev/null
fi
# Rename the file to the real destination.
$doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null ||
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
{
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
test ! -f "$dst" ||
$doit $rmcmd "$dst" 2>/dev/null ||
{ $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null &&
{ $doit $rmcmd "$rmtmp" 2>/dev/null; :; }
} ||
{ echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
fi || exit 1
trap '' 0
fi
done

207
missing Executable file

@@ -0,0 +1,207 @@
#! /bin/sh
# Common wrapper for a few potentially missing GNU programs.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1996-2021 Free Software Foundation, Inc.
# Originally written by François Pinard <pinard@iro.umontreal.ca>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try '$0 --help' for more information"
exit 1
fi
case $1 in
--is-lightweight)
# Used by our autoconf macros to check whether the available missing
# script is modern enough.
exit 0
;;
--run)
# Back-compat with the calling convention used by older automake.
shift
;;
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Run 'PROGRAM [ARGUMENT]...', returning proper advice when this fails due
to PROGRAM being missing or too old.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
Supported PROGRAM values:
aclocal autoconf autoheader autom4te automake makeinfo
bison yacc flex lex help2man
Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and
'g' are ignored when checking the name.
Send bug reports to <bug-automake@gnu.org>."
exit $?
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
exit $?
;;
-*)
echo 1>&2 "$0: unknown '$1' option"
echo 1>&2 "Try '$0 --help' for more information"
exit 1
;;
esac
# Run the given program, remember its exit status.
"$@"; st=$?
# If it succeeded, we are done.
test $st -eq 0 && exit 0
# Also exit now if it failed (or wasn't found), and '--version' was
# passed; such an option is passed most likely to detect whether the
# program is present and works.
case $2 in --version|--help) exit $st;; esac
# Exit code 63 means version mismatch. This often happens when the user
# tries to use an ancient version of a tool on a file that requires a
# minimum version.
if test $st -eq 63; then
msg="probably too old"
elif test $st -eq 127; then
# Program was missing.
msg="missing on your system"
else
# Program was found and executed, but failed. Give up.
exit $st
fi
perl_URL=https://www.perl.org/
flex_URL=https://github.com/westes/flex
gnu_software_URL=https://www.gnu.org/software
program_details ()
{
case $1 in
aclocal|automake)
echo "The '$1' program is part of the GNU Automake package:"
echo "<$gnu_software_URL/automake>"
echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/autoconf>"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
autoconf|autom4te|autoheader)
echo "The '$1' program is part of the GNU Autoconf package:"
echo "<$gnu_software_URL/autoconf/>"
echo "It also requires GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
esac
}
give_advice ()
{
# Normalize program name to check for.
normalized_program=`echo "$1" | sed '
s/^gnu-//; t
s/^gnu//; t
s/^g//; t'`
printf '%s\n' "'$1' is $msg."
configure_deps="'configure.ac' or m4 files included by 'configure.ac'"
case $normalized_program in
autoconf*)
echo "You should only need it if you modified 'configure.ac',"
echo "or m4 files included by it."
program_details 'autoconf'
;;
autoheader*)
echo "You should only need it if you modified 'acconfig.h' or"
echo "$configure_deps."
program_details 'autoheader'
;;
automake*)
echo "You should only need it if you modified 'Makefile.am' or"
echo "$configure_deps."
program_details 'automake'
;;
aclocal*)
echo "You should only need it if you modified 'acinclude.m4' or"
echo "$configure_deps."
program_details 'aclocal'
;;
autom4te*)
echo "You might have modified some maintainer files that require"
echo "the 'autom4te' program to be rebuilt."
program_details 'autom4te'
;;
bison*|yacc*)
echo "You should only need it if you modified a '.y' file."
echo "You may want to install the GNU Bison package:"
echo "<$gnu_software_URL/bison/>"
;;
lex*|flex*)
echo "You should only need it if you modified a '.l' file."
echo "You may want to install the Fast Lexical Analyzer package:"
echo "<$flex_URL>"
;;
help2man*)
echo "You should only need it if you modified a dependency" \
"of a man page."
echo "You may want to install the GNU Help2man package:"
echo "<$gnu_software_URL/help2man/>"
;;
makeinfo*)
echo "You should only need it if you modified a '.texi' file, or"
echo "any other file indirectly affecting the aspect of the manual."
echo "You might want to install the Texinfo package:"
echo "<$gnu_software_URL/texinfo/>"
echo "The spurious makeinfo call might also be the consequence of"
echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might"
echo "want to install GNU make:"
echo "<$gnu_software_URL/make/>"
;;
*)
echo "You might have modified some files without having the proper"
echo "tools for further handling them. Check the 'README' file, it"
echo "often tells you about the needed prerequisites for installing"
echo "this package. You may also peek at any GNU archive site, in"
echo "case some other package contains this missing '$1' program."
;;
esac
}
give_advice "$1" | sed -e '1s/^/WARNING: /' \
-e '2,$s/^/ /' >&2
# Propagate the correct exit status (expected to be 127 for a program
# not found, 63 for a program that failed due to version mismatch).
exit $st

6
src/README.md Normal file

@@ -0,0 +1,6 @@
## Coding Convention
- External variables: capital letters
- Static global variables: lower case letters
- Function names: TypeName_verb or verb_noun
- Type names: camel case with the first letter capitalised, e.g. CamelCase
- Indentation style: ``indent -kr -nut *.c *.h``

File diff suppressed because it is too large.


@@ -1,10 +1,6 @@
#ifndef CACHE_H
#define CACHE_H
#include "link.h"
#include <pthread.h>
/**
* \file cache.h
* \brief cache related structures and functions
@@ -13,35 +9,67 @@
* separate folders.
*/
typedef struct Cache Cache;
#include "link.h"
#include "network.h"
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
/**
* \brief Type definition for a cache segment
*/
typedef uint8_t Seg;
/**
* \brief cache in-memory data structure
* \brief cache data type in-memory data structure
*/
typedef struct {
char *path; /**< the path to the file on the web server */
Link *link; /**< the Link associated with this cache data set */
long time; /**<the modified time of the file */
off_t content_length; /**<the size of the file */
struct Cache {
/** \brief How many times the cache has been opened */
int cache_opened;
pthread_t bgt; /**< background download pthread */
pthread_mutex_t bgt_lock; /**< mutex for the background download thread */
pthread_mutexattr_t bgt_lock_attr; /**< attributes for bgt_lock */
off_t next_offset; /**<the offset of the next segment to be
downloaded in background*/
/** \brief the FILE pointer for the data file*/
FILE *dfp;
/** \brief the FILE pointer for the metadata */
FILE *mfp;
/** \brief the path to the local cache file */
char *path;
/** \brief the Link associated with this cache data set */
Link *link;
/** \brief the modified time of the file */
long time;
/** \brief the size of the file */
off_t content_length;
/** \brief the block size of the data file */
int blksz;
/** \brief segment array byte count */
long segbc;
/** \brief the detail of each segment */
Seg *seg;
pthread_mutex_t rw_lock; /**< mutex for read/write operation */
pthread_mutexattr_t rw_lock_attr; /**< attributes for rw_lock */
/** \brief mutex lock for seek operation */
pthread_mutex_t seek_lock;
/** \brief mutex lock for write operation */
pthread_mutex_t w_lock;
FILE *dfp; /**< The FILE pointer for the data file*/
FILE *mfp; /**< The FILE pointer for the metadata */
int blksz; /**<the block size of the data file */
long segbc; /**<segment array byte count */
Seg *seg; /**< the detail of each segment */
} Cache;
/** \brief background download pthread */
pthread_t bgt;
/**
* \brief mutex lock for the background download thread
* \note This lock is locked by the foreground thread, but unlocked by the
* background thread!
*/
pthread_mutex_t bgt_lock;
/** \brief mutex attributes for bgt_lock */
pthread_mutexattr_t bgt_lock_attr;
/** \brief the offset of the next segment to be downloaded in background*/
off_t next_dl_offset;
/** \brief the FUSE filesystem path to the remote file*/
char *fs_path;
};
/**
* \brief whether the cache system is enabled
@@ -49,14 +77,9 @@ typedef struct {
extern int CACHE_SYSTEM_INIT;
/**
* \brief The size of each download segment
* \brief The metadata directory
*/
extern int DATA_BLK_SZ;
/**
* \brief The maximum segment count for a single cache file
*/
extern int MAX_SEGBC;
extern char *META_DIR;
/**
* \brief initialise the cache system directories
@@ -67,7 +90,7 @@ extern int MAX_SEGBC;
* If these directories do not exist, they will be created.
* \note Called by parse_arg_list(), verified to be working
*/
void CacheSystem_init(const char *path, int path_supplied);
void CacheSystem_init(const char *path, int url_supplied);
/**
* \brief Create directories under the cache directory structure, if they do
@@ -95,11 +118,11 @@ void Cache_close(Cache *cf);
/**
* \brief create a cache file set if it doesn't exist already
* \return
* - 0, if the cache file already exists, or was created succesfully.
* - 0, if the cache file already exists, or was created successfully.
* - -1, otherwise
* \note Called by fs_open()
*/
int Cache_create(Link *this_link);
int Cache_create(const char *path);
/**
* \brief delete a cache file set
@@ -114,10 +137,10 @@ void Cache_delete(const char *fn);
* \param[in] cf the cache in-memory data structure
* \param[out] output_buf the output buffer
* \param[in] len the requested segment size
* \param[in] offset the start of the segment
* \param[in] offset_start the start of the segment
* \return the length of the segment the cache system managed to obtain.
* \note Called by fs_read(), verified to be working
*/
long Cache_read(Cache *cf, char *output_buf, off_t len, off_t offset);
long Cache_read(Cache *cf, char *const output_buf, const off_t len,
const off_t offset_start);
#endif

75
src/config.c Normal file
View File

@@ -0,0 +1,75 @@
#include "config.h"
#include "log.h"
#include <stddef.h>
/**
* \brief The default HTTP 429 (too many requests) wait time
*/
#define DEFAULT_HTTP_WAIT_SEC 5
/**
* \brief Data file block size
* \details We set it to 1024*1024*8 = 8MiB
*/
#define DEFAULT_DATA_BLKSZ 8*1024*1024
/**
* \brief Maximum segment block count
* \details This is set to 128*1024 segments; at one byte per segment, the
* segment array uses 128 KiB of memory. By default this lets a single cache
* file hold (128*1024)*(8*1024*1024) bytes = 1 TiB of data
*/
#define DEFAULT_MAX_SEGBC 128*1024
ConfigStruct CONFIG;
/**
* \note The opening curly bracket should be at line 39, so the code lines up
* with the definition code in util.h.
*/
void Config_init(void)
{
CONFIG.mode = NORMAL;
CONFIG.log_type = log_level_init();
/*---------------- Network related --------------*/
CONFIG.http_username = NULL;
CONFIG.http_password = NULL;
CONFIG.proxy = NULL;
CONFIG.proxy_username = NULL;
CONFIG.proxy_password = NULL;
CONFIG.max_conns = DEFAULT_NETWORK_MAX_CONNS;
CONFIG.user_agent = DEFAULT_USER_AGENT;
CONFIG.http_wait_sec = DEFAULT_HTTP_WAIT_SEC;
CONFIG.no_range_check = 0;
CONFIG.insecure_tls = 0;
CONFIG.refresh_timeout = DEFAULT_REFRESH_TIMEOUT;
/*--------------- Cache related ---------------*/
CONFIG.cache_enabled = 0;
CONFIG.cache_dir = NULL;
CONFIG.data_blksz = DEFAULT_DATA_BLKSZ;
CONFIG.max_segbc = DEFAULT_MAX_SEGBC;
/*-------------- Sonic related -------------*/
CONFIG.sonic_username = NULL;
CONFIG.sonic_password = NULL;
CONFIG.sonic_id3 = 0;
CONFIG.sonic_insecure = 0;
}

102
src/config.h Normal file

@@ -0,0 +1,102 @@
#ifndef CONFIG_H
#define CONFIG_H
/**
* \brief the maximum length of a path and a URL.
* \details This corresponds to the maximum path length under Ext4.
*/
#define MAX_PATH_LEN 4096
/**
* \brief the maximum length of a filename.
* \details This corresponds to the maximum filename length under Ext4.
*/
#define MAX_FILENAME_LEN 255
/**
* \brief the default user agent string
*/
#define DEFAULT_USER_AGENT "HTTPDirFS-" VERSION
/**
* \brief The default maximum number of network connections
*/
#define DEFAULT_NETWORK_MAX_CONNS 10
/**
* \brief The default refresh_timeout
*/
#define DEFAULT_REFRESH_TIMEOUT 3600
/**
* \brief Operation modes
*/
typedef enum {
NORMAL = 1,
SONIC = 2,
SINGLE = 3,
} OperationMode;
/**
* \brief configuration data structure
* \note The opening curly bracket should be at line 39, so the code below
* lines up with the initialisation code in util.c
*/
typedef struct {
/** \brief Operation Mode */
OperationMode mode;
/** \brief Current log level */
int log_type;
/*---------------- Network related --------------*/
/** \brief HTTP username */
char *http_username;
/** \brief HTTP password */
char *http_password;
/** \brief HTTP proxy URL */
char *proxy;
/** \brief HTTP proxy username */
char *proxy_username;
/** \brief HTTP proxy password */
char *proxy_password;
/** \brief HTTP proxy certificate file */
char *proxy_cafile;
/** \brief HTTP maximum connection count */
long max_conns;
/** \brief HTTP user agent*/
char *user_agent;
/** \brief The waiting time after getting HTTP 429 (too many requests) */
int http_wait_sec;
/** \brief Disable check for the server's support of HTTP range request */
int no_range_check;
/** \brief Disable TLS certificate verification */
int insecure_tls;
/** \brief Server certificate file */
char *cafile;
/** \brief Refresh directory listing after refresh_timeout seconds*/
int refresh_timeout;
/*--------------- Cache related ---------------*/
/** \brief Whether cache mode is enabled */
int cache_enabled;
/** \brief The cache location*/
char *cache_dir;
/** \brief The size of each download segment for cache mode */
int data_blksz;
/** \brief The maximum segment count for a single cache file */
int max_segbc;
/*-------------- Sonic related -------------*/
/** \brief The Sonic server username */
char *sonic_username;
/** \brief The Sonic server password */
char *sonic_password;
/** \brief Whether we are using sonic mode ID3 extension */
int sonic_id3;
/** \brief Whether we use the legacy sonic authentication mode */
int sonic_insecure;
} ConfigStruct;
/**
* \brief The Configuration data structure
*/
extern ConfigStruct CONFIG;
#endif


@@ -1,8 +1,11 @@
#include "fuse_local.h"
#include "cache.h"
#include "link.h"
#include "log.h"
/* must be included before including <fuse.h> */
/*
* must be included before including <fuse.h>
*/
#define FUSE_USE_VERSION 26
#include <fuse.h>
@@ -19,14 +22,14 @@ static void *fs_init(struct fuse_conn_info *conn)
/** \brief release an opened file */
static int fs_release(const char *path, struct fuse_file_info *fi)
{
lprintf(info, "%s\n", path);
(void) path;
if (CACHE_SYSTEM_INIT) {
Cache_close((Cache *)fi->fh);
Cache_close((Cache *) fi->fh);
}
return 0;
}
/** \brief return the attributes for a single file indicated by path */
static int fs_getattr(const char *path, struct stat *stbuf)
{
@@ -41,23 +44,27 @@ static int fs_getattr(const char *path, struct stat *stbuf)
if (!link) {
return -ENOENT;
}
struct timespec spec;
struct timespec spec = { 0 };
spec.tv_sec = link->time;
#if defined(__APPLE__) && defined(__MACH__)
stbuf->st_mtimespec = spec;
#else
stbuf->st_mtim = spec;
#endif
switch (link->type) {
case LINK_DIR:
stbuf->st_mode = S_IFDIR | 0755;
stbuf->st_nlink = 1;
break;
case LINK_FILE:
stbuf->st_mode = S_IFREG | 0444;
stbuf->st_nlink = 1;
stbuf->st_size = link->content_length;
stbuf->st_blksize = 128*1024;
stbuf->st_blocks = (link->content_length)/512;
break;
default:
return -ENOENT;
case LINK_DIR:
stbuf->st_mode = S_IFDIR | 0755;
stbuf->st_nlink = 1;
break;
case LINK_FILE:
stbuf->st_mode = S_IFREG | 0444;
stbuf->st_nlink = 1;
stbuf->st_size = link->content_length;
stbuf->st_blksize = 128 * 1024;
stbuf->st_blocks = (link->content_length) / 512;
break;
default:
return -ENOENT;
}
}
stbuf->st_uid = getuid();
@@ -67,12 +74,13 @@ static int fs_getattr(const char *path, struct stat *stbuf)
}
/** \brief read a file */
static int fs_read(const char *path, char *buf, size_t size, off_t offset,
struct fuse_file_info *fi)
static int
fs_read(const char *path, char *buf, size_t size, off_t offset,
struct fuse_file_info *fi)
{
long received;
long received;
if (CACHE_SYSTEM_INIT) {
received = Cache_read((Cache *)fi->fh, buf, size, offset);
received = Cache_read((Cache *) fi->fh, buf, size, offset);
} else {
received = path_download(path, buf, size, offset);
}
@@ -82,27 +90,34 @@ static int fs_read(const char *path, char *buf, size_t size, off_t offset,
/** \brief open a file indicated by the path */
static int fs_open(const char *path, struct fuse_file_info *fi)
{
lprintf(info, "%s\n", path);
Link *link = path_to_Link(path);
if (!link) {
return -ENOENT;
}
if ((fi->flags & 3) != O_RDONLY) {
return -EACCES;
lprintf(debug, "%s found.\n", path);
if ((fi->flags & O_RDWR) != O_RDONLY) {
return -EROFS;
}
if (CACHE_SYSTEM_INIT) {
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
if (!fi->fh) {
/*
* The link clearly exists, the cache cannot be opened, attempt
* cache creation
*/
lprintf(debug, "Cache_delete(%s);\n", path);
Cache_delete(path);
Cache_create(link);
lprintf(debug, "Cache_create(%s);\n", path);
Cache_create(path);
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
/*
* The cache definitely cannot be opened for some reason.
*/
if (!fi->fh) {
lprintf(fatal, "Cache file creation failure for %s.\n", path);
return -ENOENT;
}
}
@@ -110,30 +125,45 @@ static int fs_open(const char *path, struct fuse_file_info *fi)
return 0;
}
/** \brief read the directory indicated by the path*/
static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t dir_add,
off_t offset, struct fuse_file_info *fi)
/**
* \brief read the directory indicated by the path
* \note
* - releasedir() is not implemented, because I don't see why anybody would
* want the LinkTables to be evicted from memory while this program is
* running. If you want to evict LinkTables, just unmount the filesystem.
* - There is no real need to associate the LinkTable with the fi of each
* directory data structure. To reach a deeply nested directory, the
* LinkTables for all the levels above it have to be generated anyway, so we
* might as well maintain our own tree structure.
*/
static int
fs_readdir(const char *path, void *buf, fuse_fill_dir_t dir_add,
off_t offset, struct fuse_file_info *fi)
{
(void) offset;
(void) fi;
Link *link;
LinkTable *linktbl;
if (!strcmp(path, "/")) {
linktbl = ROOT_LINK_TBL;
} else {
linktbl = path_to_Link_LinkTable_new(path);
if(!linktbl) {
return -ENOENT;
}
#ifdef DEBUG
static int j = 0;
lprintf(debug, "!!!!Calling fs_readdir for the %d time!!!!\n", j);
j++;
#endif
linktbl = path_to_Link_LinkTable_new(path);
if (!linktbl) {
return -ENOENT;
}
/* start adding the links */
/*
* start adding the links
*/
dir_add(buf, ".", NULL, 0);
dir_add(buf, "..", NULL, 0);
/* We skip the head link */
for (int i = 1; i < linktbl->num; i++) {
link = linktbl->links[i];
Link *link = linktbl->links[i];
if (link->type != LINK_INVALID) {
dir_add(buf, link->linkname, NULL, 0);
}
@@ -143,12 +173,12 @@ static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t dir_add,
}
static struct fuse_operations fs_oper = {
.getattr = fs_getattr,
.readdir = fs_readdir,
.open = fs_open,
.read = fs_read,
.init = fs_init,
.release = fs_release
.getattr = fs_getattr,
.readdir = fs_readdir,
.open = fs_open,
.read = fs_read,
.init = fs_init,
.release = fs_release
};
int fuse_local_init(int argc, char **argv)

View File

@@ -1,6 +1,11 @@
#ifndef FUSE_LOCAL_H
#define FUSE_LOCAL_H
/**
* \file fuse_local.h
* \brief FUSE related functions
*/
/* Initialise fuse */
int fuse_local_init(int argc, char **argv);

1310
src/link.c

File diff suppressed because it is too large.


@@ -1,45 +1,66 @@
#ifndef LINK_H
#define LINK_H
#include "util.h"
/**
* \file link.h
* \brief link related structures and functions
*/
typedef struct Link Link;
typedef struct LinkTable LinkTable;
#include "cache.h"
#include "config.h"
#include "network.h"
#include "sonic.h"
#include <curl/curl.h>
/** \brief the link type */
/**
* \brief the link type
*/
typedef enum {
LINK_HEAD = 'H',
LINK_DIR = 'D',
LINK_FILE = 'F',
LINK_INVALID = 'I'
LINK_INVALID = 'I',
LINK_UNINITIALISED_FILE = 'U'
} LinkType;
/**
* \brief link table type
* \details index 0 contains the Link for the base URL
*/
typedef struct LinkTable LinkTable;
/** \brief link data type */
typedef struct Link Link;
/**
* \brief Link data structure
*/
struct Link {
char linkname[MAX_FILENAME_LEN+1]; /**< The link name in the last level of
the URL */
char f_url[MAX_PATH_LEN+1]; /**< The full URL of the file */
LinkType type; /**< The type of the link */
size_t content_length; /**< CURLINFO_CONTENT_LENGTH_DOWNLOAD of the file */
LinkTable *next_table; /**< The next LinkTable level, if it is a LINK_DIR */
long time; /**< CURLINFO_FILETIME obtained from the server */
};
struct LinkTable {
int num;
time_t index_time;
Link **links;
};
/**
* \brief Link type data structure
*/
struct Link {
/** \brief The link name in the last level of the URL */
char linkname[MAX_FILENAME_LEN + 1];
/** \brief This is for storing the unescaped path */
char linkpath[MAX_FILENAME_LEN + 1];
/** \brief The full URL of the file */
char f_url[MAX_PATH_LEN + 1];
/** \brief The type of the link */
LinkType type;
/** \brief CURLINFO_CONTENT_LENGTH_DOWNLOAD of the file */
size_t content_length;
/** \brief The next LinkTable level, if it is a LINK_DIR */
LinkTable *next_table;
/** \brief CURLINFO_FILETIME obtained from the server */
long time;
/** \brief The pointer associated with the cache file */
Cache *cache_ptr;
/** \brief Stores *sonic related data */
Sonic sonic;
};
/**
* \brief root link table
*/
@@ -51,14 +72,14 @@ extern LinkTable *ROOT_LINK_TBL;
extern int ROOT_LINK_OFFSET;
/**
* \brief
* \brief initialise link sub-system.
*/
void Link_get_stat(Link *this_link);
LinkTable *LinkSystem_init(const char *raw_url);
/**
* \brief set the stats for a file
* \brief Set the stats of a link, after curl multi handle finished querying
*/
void Link_set_stat(Link* this_link, CURL *curl);
void Link_set_file_stat(Link *this_link, CURL *curl);
/**
* \brief create a new LinkTable
@@ -66,12 +87,19 @@ void Link_set_stat(Link* this_link, CURL *curl);
LinkTable *LinkTable_new(const char *url);
/**
* \brief download a link
* \brief download a path
* \return the number of bytes downloaded
*/
long path_download(const char *path, char *output_buf, size_t size,
off_t offset);
/**
* \brief Download a Link
* \return the number of bytes downloaded
*/
long Link_download(Link *link, char *output_buf, size_t req_size,
off_t offset);
/**
* \brief find the link associated with a path
*/
@@ -89,6 +117,34 @@ int LinkTable_disk_save(LinkTable *linktbl, const char *dirn);
/**
* \brief load a link table from the disk.
* \param[in] dirn We expect the unescaped_path here!
*/
LinkTable *LinkTable_disk_open(const char *dirn);
/**
* \brief Download a link's content to the memory
* \warning You MUST free the memory field in TransferStruct after use!
*/
TransferStruct Link_download_full(Link *head_link);
/**
* \brief Allocate a LinkTable
* \note This does not fill in the LinkTable.
*/
LinkTable *LinkTable_alloc(const char *url);
/**
* \brief free a LinkTable
*/
void LinkTable_free(LinkTable *linktbl);
/**
* \brief print a LinkTable
*/
void LinkTable_print(LinkTable *linktbl);
/**
* \brief add a Link to a LinkTable
*/
void LinkTable_add(LinkTable *linktbl, Link *link);
#endif

76
src/log.c Normal file
View File

@@ -0,0 +1,76 @@
#include "log.h"
#include "config.h"
#include "util.h"
#include <curl/curl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
int log_level_init()
{
char *env = getenv("HTTPDIRFS_LOG_LEVEL");
if (env) {
return atoi(env);
}
#ifdef DEBUG
return DEFAULT_LOG_LEVEL | debug;
#else
return DEFAULT_LOG_LEVEL;
#endif
}
void
log_printf(LogType type, const char *file, const char *func, int line,
const char *format, ...)
{
if (type & CONFIG.log_type) {
switch (type) {
case fatal:
fprintf(stderr, "Fatal:");
break;
case error:
fprintf(stderr, "Error:");
break;
case warning:
fprintf(stderr, "Warning:");
break;
case info:
goto print_actual_message;
default:
fprintf(stderr, "Debug");
if (type != debug) {
fprintf(stderr, "(%x)", type);
}
fprintf(stderr, ":");
break;
}
fprintf(stderr, "%s:%d:", file, line);
print_actual_message: {
}
fprintf(stderr, "%s: ", func);
va_list args;
va_start(args, format);
vfprintf(stderr, format, args);
va_end(args);
if (type == fatal) {
exit_failure();
}
}
}
void print_version()
{
/* FUSE prints its help to stderr */
fprintf(stderr, "HTTPDirFS version " VERSION "\n");
/*
* --------- Print off SSL engine version ---------
*/
curl_version_info_data *data = curl_version_info(CURLVERSION_NOW);
fprintf(stderr, "libcurl SSL engine: %s\n", data->ssl_version);
}

49
src/log.h Normal file

@@ -0,0 +1,49 @@
#ifndef LOG_H
#define LOG_H
/**
* \brief Log types
*/
typedef enum {
fatal = 1 << 0,
error = 1 << 1,
warning = 1 << 2,
info = 1 << 3,
debug = 1 << 4,
link_lock_debug = 1 << 5,
network_lock_debug = 1 << 6,
cache_lock_debug = 1 << 7,
memcache_debug = 1 << 8,
libcurl_debug = 1 << 9,
} LogType;
/**
* \brief The default log level
*/
#define DEFAULT_LOG_LEVEL fatal | error | warning | info
/**
* \brief Get the log level from the environment.
*/
int log_level_init();
/**
* \brief Log printf
* \details This is for printing nice log messages
*/
void log_printf(LogType type, const char *file, const char *func, int line,
const char *format, ...);
/**
* \brief Log type printf
* \details This macro automatically prints out the filename and line number
*/
#define lprintf(type, ...) \
log_printf(type, __FILE__, __func__, __LINE__, __VA_ARGS__)
/**
* \brief Print the version information for HTTPDirFS
*/
void print_version();
#endif


@@ -1,33 +1,41 @@
#include "cache.h"
#include "fuse_local.h"
#include "network.h"
#include "link.h"
#include "log.h"
#include "util.h"
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void add_arg(char ***fuse_argv_ptr, int *fuse_argc, char *opt_string);
static void print_help(char *program_name, int long_help);
static void print_version();
static void print_long_help();
static int
parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc);
void parse_config_file(char ***argv, int *argc);
static char *config_path = NULL;
int main(int argc, char **argv)
{
/* Automatically print help if not enough arguments are supplied */
/*
* Automatically print help if not enough arguments are supplied
*/
if (argc < 2) {
print_help(argv[0], 0);
fprintf(stderr, "For more information, run \"%s --help.\"\n", argv[0]);
fprintf(stderr, "For more information, run \"%s --help.\"\n",
argv[0]);
exit(EXIT_FAILURE);
}
/* These are passed into fuse initialiser */
/*
* These are passed into fuse initialiser
*/
char **fuse_argv = NULL;
int fuse_argc = 0;
/* These are the combined argument with the config file */
/*
* These are the combined argument with the config file
*/
char **all_argv = NULL;
int all_argc = 0;
@@ -36,39 +44,71 @@ int main(int argc, char **argv)
/*--- FUSE expects the first initialisation to be the program's name ---*/
add_arg(&fuse_argv, &fuse_argc, argv[0]);
/* initialise network configuration struct */
network_config_init();
/*
* initialise network configuration struct
*/
Config_init();
/* parse the config file, if it exists, store it in all_argv and all_argc */
parse_config_file(&all_argv, &all_argc);
/*
* initialise network subsystem
*/
NetworkSystem_init();
/* Copy the command line argument list to the combined argument list */
/*
* Copy the command line argument list to the combined argument list
*/
for (int i = 1; i < argc; i++) {
add_arg(&all_argv, &all_argc, argv[i]);
if (!strcmp(argv[i], "--config")) {
config_path = strdup(argv[i + 1]);
}
}
/* parse the combined argument list */
/*
* parse the config file, if it exists, store it in all_argv and
* all_argc
*/
parse_config_file(&all_argv, &all_argc);
/*
* parse the combined argument list
*/
if (parse_arg_list(all_argc, all_argv, &fuse_argv, &fuse_argc)) {
/*
* If we reach here, the user did not supply enough arguments;
* fall through to FUSE so it can print its own messages
*/
goto fuse_start;
}
/*--- Add the last remaining argument, which is the mountpoint ---*/
add_arg(&fuse_argv, &fuse_argc, argv[argc-1]);
add_arg(&fuse_argv, &fuse_argc, argv[argc - 1]);
/* The second last remaining argument is the URL */
char *base_url = argv[argc-2];
if (strncmp(base_url, "http://", 7) && strncmp(base_url, "https://", 8)) {
/*
* The second last remaining argument is the URL
*/
char *base_url = argv[argc - 2];
if (strncmp(base_url, "http://", 7)
&& strncmp(base_url, "https://", 8)) {
fprintf(stderr, "Error: Please supply a valid URL.\n");
print_help(argv[0], 0);
exit(EXIT_FAILURE);
} else {
if(!network_init(base_url)) {
fprintf(stderr, "Error: Network initialisation failed.\n");
if (CONFIG.sonic_username && CONFIG.sonic_password) {
CONFIG.mode = SONIC;
} else if (CONFIG.sonic_username || CONFIG.sonic_password) {
fprintf(stderr,
"Error: You have to supply both username and password to \
activate Sonic mode.\n");
exit(EXIT_FAILURE);
}
if (!LinkSystem_init(base_url)) {
fprintf(stderr, "Network initialisation failed.\n");
exit(EXIT_FAILURE);
}
}
fuse_start:
fuse_start:
fuse_local_init(fuse_argc, fuse_argv);
return 0;
@@ -76,15 +116,22 @@ int main(int argc, char **argv)
void parse_config_file(char ***argv, int *argc)
{
char *xdg_config_home = getenv("XDG_CONFIG_HOME");
if (!xdg_config_home) {
char *home = getenv("HOME");
char *xdg_config_home_default = "/.config";
xdg_config_home = path_append(home, xdg_config_home_default);
char *full_path;
if (!config_path) {
char *xdg_config_home = getenv("XDG_CONFIG_HOME");
if (!xdg_config_home) {
char *home = getenv("HOME");
char *xdg_config_home_default = "/.config";
xdg_config_home = path_append(home, xdg_config_home_default);
}
full_path = path_append(xdg_config_home, "/httpdirfs/config");
} else {
full_path = config_path;
}
char *full_path = path_append(xdg_config_home, "/httpdirfs/config");
/* The buffer has to be able to fit a URL */
/*
* The buffer has to be able to fit a URL
*/
int buf_len = MAX_PATH_LEN;
char buf[buf_len];
FILE *config = fopen(full_path, "r");
@@ -96,120 +143,180 @@ void parse_config_file(char ***argv, int *argc)
char *space;
space = strchr(buf, ' ');
if (!space) {
*argv = realloc(*argv, *argc * sizeof(char **));
*argv = realloc(*argv, *argc * sizeof(char *));
(*argv)[*argc - 1] = strndup(buf, buf_len);
} else {
(*argc)++;
*argv = realloc(*argv, *argc * sizeof(char **));
/* Only copy up to the space character*/
*argv = realloc(*argv, *argc * sizeof(char *));
/*
* Only copy up to the space character
*/
(*argv)[*argc - 2] = strndup(buf, space - buf);
/* Starts copying after the space */
/*
* Starts copying after the space
*/
(*argv)[*argc - 1] = strndup(space + 1,
buf_len - (space + 1 - buf));
buf_len -
(space + 1 - buf));
}
}
}
fclose(config);
}
FREE(full_path);
}
static int
parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
{
char c;
int c;
int long_index = 0;
const char *short_opts = "o:hVdfsp:u:P:";
const struct option long_opts[] = {
/* Note that 'L' is returned for long options */
{"help", no_argument, NULL, 'h'}, /* 0 */
{"version", no_argument, NULL, 'V'}, /* 1 */
{"debug", no_argument, NULL, 'd'}, /* 2 */
{"username", required_argument, NULL, 'u'}, /* 3 */
{"password", required_argument, NULL, 'p'}, /* 4 */
{"proxy", required_argument, NULL, 'P'}, /* 5 */
{"proxy-username", required_argument, NULL, 'L'}, /* 6 */
{"proxy-password", required_argument, NULL, 'L'}, /* 7 */
{"cache", no_argument, NULL, 'L'}, /* 8 */
{"dl-seg-size", required_argument, NULL, 'L'}, /* 9 */
{"max-seg-count", required_argument, NULL, 'L'}, /* 10 */
{"max-conns", required_argument, NULL, 'L'}, /* 11 */
{"user-agent", required_argument, NULL, 'L'}, /* 12 */
{"retry-wait", required_argument, NULL, 'L'}, /* 13 */
{"cache-location", required_argument, NULL, 'L'}, /* 14 */
{0, 0, 0, 0}
/*
* Note that 'L' is returned for long options
*/
{ "help", no_argument, NULL, 'h' }, /* 0 */
{ "version", no_argument, NULL, 'V' }, /* 1 */
{ "debug", no_argument, NULL, 'd' }, /* 2 */
{ "username", required_argument, NULL, 'u' }, /* 3 */
{ "password", required_argument, NULL, 'p' }, /* 4 */
{ "proxy", required_argument, NULL, 'P' }, /* 5 */
{ "proxy-username", required_argument, NULL, 'L' }, /* 6 */
{ "proxy-password", required_argument, NULL, 'L' }, /* 7 */
{ "cache", no_argument, NULL, 'L' }, /* 8 */
{ "dl-seg-size", required_argument, NULL, 'L' }, /* 9 */
{ "max-seg-count", required_argument, NULL, 'L' }, /* 10 */
{ "max-conns", required_argument, NULL, 'L' }, /* 11 */
{ "user-agent", required_argument, NULL, 'L' }, /* 12 */
{ "retry-wait", required_argument, NULL, 'L' }, /* 13 */
{ "cache-location", required_argument, NULL, 'L' }, /* 14 */
{ "sonic-username", required_argument, NULL, 'L' }, /* 15 */
{ "sonic-password", required_argument, NULL, 'L' }, /* 16 */
{ "sonic-id3", no_argument, NULL, 'L' }, /* 17 */
{ "no-range-check", no_argument, NULL, 'L' }, /* 18 */
{ "sonic-insecure", no_argument, NULL, 'L' }, /* 19 */
{ "insecure-tls", no_argument, NULL, 'L' }, /* 20 */
{ "config", required_argument, NULL, 'L' }, /* 21 */
{ "single-file-mode", required_argument, NULL, 'L' }, /* 22 */
{ "cacert", required_argument, NULL, 'L' }, /* 23 */
{ "proxy-cacert", required_argument, NULL, 'L' }, /* 24 */
{ "refresh-timeout", required_argument, NULL, 'L' }, /* 25 */
{ 0, 0, 0, 0 }
};
while ((c =
getopt_long(argc, argv, short_opts, long_opts,
&long_index)) != -1) {
getopt_long(argc, argv, short_opts, long_opts,
&long_index)) != -1) {
switch (c) {
case 'o':
add_arg(fuse_argv, fuse_argc, "-o");
add_arg(fuse_argv, fuse_argc, optarg);
case 'o':
add_arg(fuse_argv, fuse_argc, "-o");
add_arg(fuse_argv, fuse_argc, optarg);
break;
case 'h':
print_help(argv[0], 1);
add_arg(fuse_argv, fuse_argc, "-ho");
/*
* skip everything else to print the help
*/
return 1;
case 'V':
print_version();
add_arg(fuse_argv, fuse_argc, "-V");
return 1;
case 'd':
add_arg(fuse_argv, fuse_argc, "-d");
CONFIG.log_type |= debug;
break;
case 'f':
add_arg(fuse_argv, fuse_argc, "-f");
break;
case 's':
add_arg(fuse_argv, fuse_argc, "-s");
break;
case 'u':
CONFIG.http_username = strdup(optarg);
break;
case 'p':
CONFIG.http_password = strdup(optarg);
break;
case 'P':
CONFIG.proxy = strdup(optarg);
break;
case 'L':
/*
* Long options
*/
switch (long_index) {
case 6:
CONFIG.proxy_username = strdup(optarg);
break;
case 'h':
print_help(argv[0], 1);
add_arg(fuse_argv, fuse_argc, "-ho");
/* skip everything else to print the help */
return 1;
case 'V':
print_version(argv[0], 1);
add_arg(fuse_argv, fuse_argc, "-V");
return 1;
case 'd':
add_arg(fuse_argv, fuse_argc, "-d");
case 7:
CONFIG.proxy_password = strdup(optarg);
break;
case 'f':
add_arg(fuse_argv, fuse_argc, "-f");
case 8:
CONFIG.cache_enabled = 1;
break;
case 's':
add_arg(fuse_argv, fuse_argc, "-s");
case 9:
CONFIG.data_blksz = atoi(optarg) * 1024 * 1024;
break;
case 'u':
NETWORK_CONFIG.username = strdup(optarg);
case 10:
CONFIG.max_segbc = atoi(optarg);
break;
case 'p':
NETWORK_CONFIG.password = strdup(optarg);
case 11:
CONFIG.max_conns = atoi(optarg);
break;
case 'P':
NETWORK_CONFIG.proxy = strdup(optarg);
case 12:
CONFIG.user_agent = strdup(optarg);
break;
case 'L':
/* Long options */
switch (long_index) {
case 6:
NETWORK_CONFIG.proxy_user = strdup(optarg);
break;
case 7:
NETWORK_CONFIG.proxy_pass = strdup(optarg);
break;
case 8:
NETWORK_CONFIG.cache_enabled = 1;
break;
case 9:
DATA_BLK_SZ = atoi(optarg) * 1024 * 1024;
break;
case 10:
MAX_SEGBC = atoi(optarg);
break;
case 11:
NETWORK_CONFIG.max_conns = atoi(optarg);
break;
case 12:
NETWORK_CONFIG.user_agent = strdup(optarg);
break;
case 13:
HTTP_429_WAIT = atoi(optarg);
break;
case 14:
NETWORK_CONFIG.cache_dir = strdup(optarg);
break;
default:
fprintf(stderr, "see httpdirfs -h for usage\n");
return 1;
}
case 13:
CONFIG.http_wait_sec = atoi(optarg);
break;
case 14:
CONFIG.cache_dir = strdup(optarg);
break;
case 15:
CONFIG.sonic_username = strdup(optarg);
break;
case 16:
CONFIG.sonic_password = strdup(optarg);
break;
case 17:
CONFIG.sonic_id3 = 1;
break;
case 18:
CONFIG.no_range_check = 1;
break;
case 19:
CONFIG.sonic_insecure = 1;
break;
case 20:
CONFIG.insecure_tls = 1;
break;
case 21:
/*
* This is for --config, we don't need to do anything
*/
break;
case 22:
CONFIG.mode = SINGLE;
break;
case 23:
CONFIG.cafile = strdup(optarg);
break;
case 24:
CONFIG.proxy_cafile = strdup(optarg);
break;
case 25:
CONFIG.refresh_timeout = atoi(optarg);
break;
default:
fprintf(stderr, "see httpdirfs -h for usage\n");
return 1;
}
break;
default:
fprintf(stderr, "see httpdirfs -h for usage\n");
return 1;
}
};
return 0;
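Every long-only option returns the sentinel `'L'`, and the handler then dispatches on `long_index`, which getopt_long() fills with the matched entry's position in the option table - so the table order and the case numbers must stay in sync. A stripped-down sketch of the pattern with a hypothetical two-entry table:

```c
#include <getopt.h>
#include <string.h>

/* Copy the value of --user-agent (if given) into out; return 0 on success.
 * Hypothetical two-entry table using the same 'L' sentinel trick. */
static int parse_demo(int argc, char **argv, char *out, size_t out_len)
{
    const struct option long_opts[] = {
        { "user-agent", required_argument, NULL, 'L' }, /* 0 */
        { "cache",      no_argument,       NULL, 'L' }, /* 1 */
        { 0, 0, 0, 0 }
    };
    int c;
    int long_index = 0;
    optind = 1;                 /* reset getopt state between calls */
    while ((c = getopt_long(argc, argv, "", long_opts,
                            &long_index)) != -1) {
        if (c != 'L') {
            return 1;           /* unknown option */
        }
        switch (long_index) {   /* index into long_opts, set by getopt_long */
        case 0:
            strncpy(out, optarg, out_len - 1);
            out[out_len - 1] = '\0';
            break;
        case 1:
            /* a bare flag: nothing to copy */
            break;
        }
    }
    return 0;
}
```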
@@ -229,27 +336,22 @@ void add_arg(char ***fuse_argv_ptr, int *fuse_argc, char *opt_string)
static void print_help(char *program_name, int long_help)
{
fprintf(stderr,
"usage: %s [options] URL mountpoint\n", program_name);
/* FUSE prints its help to stderr */
fprintf(stderr, "usage: %s [options] URL mountpoint\n", program_name);
if (long_help) {
print_long_help();
}
}
static void print_version()
{
fprintf(stderr,
"HTTPDirFS version %s\n", VERSION);
}
static void print_long_help()
{
fprintf(stderr,
"\n\
/* FUSE prints its help to stderr */
fprintf(stderr, "\n\
general options:\n\
-o opt,[opt...] mount options\n\
-h --help print help\n\
-V --version print version\n\
--config Specify a configuration file \n\
-o opt,[opt...] Mount options\n\
-h --help Print help\n\
-V --version Print version\n\
\n\
HTTPDirFS options:\n\
-u --username HTTP authentication username\n\
@@ -258,9 +360,11 @@ HTTPDirFS options:\n\
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html\n\
--proxy-username Username for the proxy\n\
--proxy-password Password for the proxy\n\
--proxy-cacert Certificate authority for the proxy\n\
--cache Enable cache (default: off)\n\
--cache-location Set a custom cache location\n\
(default: \"${XDG_CACHE_HOME}/httpdirfs\")\n\
--cacert Certificate authority for the server\n\
--dl-seg-size Set cache download segment size, in MB (default: 8)\n\
Note: this setting is ignored if previously\n\
cached data is found for the requested file.\n\
@@ -271,9 +375,26 @@ HTTPDirFS options:\n\
to 1TB in size using the default segment size.\n\
--max-conns Set maximum number of network connections that\n\
libcurl is allowed to make. (default: 10)\n\
--refresh-timeout The directories are refreshed after the specified\n\
time, in seconds (default: 3600)\n\
--retry-wait Set delay in seconds before retrying an HTTP request\n\
after encountering an error. (default: 5)\n\
--user-agent Set user agent string (default: \"HTTPDirFS\")\n\
--no-range-check Disable the built-in check for the server's support\n\
for HTTP range requests\n\
--insecure-tls Disable libcurl TLS certificate verification by\n\
setting CURLOPT_SSL_VERIFYHOST to 0\n\
--single-file-mode Single file mode - rather than mounting a whole\n\
directory, present a single file inside a virtual\n\
directory.\n\
\n\
");
For mounting an Airsonic / Subsonic server:\n\
--sonic-username The username for your Airsonic / Subsonic server\n\
--sonic-password The password for your Airsonic / Subsonic server\n\
--sonic-id3 Enable ID3 mode - this presents the server content in\n\
Artist/Album/Song layout\n\
--sonic-insecure Authenticate against your Airsonic / Subsonic server\n\
using the insecure username / hex encoded password\n\
scheme\n\
\n");
}

src/memcache.c

@@ -0,0 +1,28 @@
#include "memcache.h"
#include "log.h"
#include "util.h"
#include <stdlib.h>
#include <string.h>
size_t write_memory_callback(void *recv_data, size_t size, size_t nmemb,
void *userp)
{
size_t recv_size = size * nmemb;
TransferStruct *ts = (TransferStruct *) userp;
ts->data = realloc(ts->data, ts->curr_size + recv_size + 1);
if (!ts->data) {
/*
* out of memory!
*/
lprintf(fatal, "realloc failure!\n");
}
memmove(&ts->data[ts->curr_size], recv_data, recv_size);
ts->curr_size += recv_size;
ts->data[ts->curr_size] = '\0';
return recv_size;
}
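libcurl hands the response to the write callback in arbitrary chunks, so the callback must append each chunk to a growing, NUL-terminated buffer and return the number of bytes it consumed. The accumulation pattern can be exercised without libcurl (`Buf` stands in for TransferStruct's data fields; here a failed realloc() reports failure to libcurl by returning 0, whereas the version above aborts via `lprintf(fatal, ...)`):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for TransferStruct's data/curr_size fields */
typedef struct {
    char *data;        /* grows with realloc() */
    size_t curr_size;  /* bytes stored, excluding the trailing NUL */
} Buf;

/* Same contract as write_memory_callback(): return bytes consumed;
 * returning anything else makes libcurl abort the transfer. */
static size_t append_chunk(void *recv_data, size_t size, size_t nmemb,
                           void *userp)
{
    size_t recv_size = size * nmemb;
    Buf *b = (Buf *) userp;
    char *grown = realloc(b->data, b->curr_size + recv_size + 1);
    if (!grown) {
        return 0;      /* signal failure; old buffer is still valid */
    }
    b->data = grown;
    memmove(&b->data[b->curr_size], recv_data, recv_size);
    b->curr_size += recv_size;
    b->data[b->curr_size] = '\0';
    return recv_size;
}
```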

src/memcache.h

@@ -0,0 +1,35 @@
#ifndef memcache_H
#define memcache_H
#include "link.h"
/**
* \brief specify the type of data transfer
*/
typedef enum {
FILESTAT = 's',
DATA = 'd'
} TransferType;
/**
* \brief For storing transfer data and metadata
*/
struct TransferStruct {
/** \brief The array to store the data */
char *data;
/** \brief The current size of the array */
size_t curr_size;
/** \brief The type of transfer being done */
TransferType type;
/** \brief Whether transfer is in progress */
volatile int transferring;
/** \brief The link associated with the transfer */
Link *link;
};
/**
* \brief Callback function for file transfer
*/
size_t write_memory_callback(void *contents, size_t size, size_t nmemb,
void *userp);
#endif


@@ -1,24 +1,24 @@
#include "network.h"
#include "cache.h"
#include "log.h"
#include "memcache.h"
#include "util.h"
#include <openssl/crypto.h>
#include <errno.h>
#include <pthread.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#define DEFAULT_NETWORK_MAX_CONNS 10
#define DEFAULT_HTTP_429_WAIT 5
/* ----------------- External variables ---------------------- */
/*
* ----------------- External variables ----------------------
*/
CURLSH *CURL_SHARE;
NetworkConfigStruct NETWORK_CONFIG;
int HTTP_429_WAIT = DEFAULT_HTTP_429_WAIT;
/* ----------------- Static variable ----------------------- */
/*
* ----------------- Static variable -----------------------
*/
/** \brief curl multi interface handle */
static CURLM *curl_multi;
/** \brief mutex for transfer functions */
@@ -27,93 +27,125 @@ static pthread_mutex_t transfer_lock;
static pthread_mutex_t *crypto_lockarray;
/** \brief mutex for curl share interface itself */
static pthread_mutex_t curl_lock;
/** \brief network configuration */
/* -------------------- Functions -------------------------- */
/*
* -------------------- Functions --------------------------
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-function"
/**
* \brief OpenSSL 1.02 cryptography callback function
* \details Required for OpenSSL 1.02, but not OpenSSL 1.1
*/
static void crypto_lock_callback(int mode, int type, char *file, int line)
{
(void)file;
(void)line;
if(mode & CRYPTO_LOCK) {
pthread_mutex_lock(&(crypto_lockarray[type]));
(void) file;
(void) line;
if (mode & CRYPTO_LOCK) {
PTHREAD_MUTEX_LOCK(&(crypto_lockarray[type]));
} else {
pthread_mutex_unlock(&(crypto_lockarray[type]));
PTHREAD_MUTEX_UNLOCK(&(crypto_lockarray[type]));
}
}
/**
* \brief OpenSSL 1.02 thread ID function
* \details Required for OpenSSL 1.02, but not OpenSSL 1.1
*/
static unsigned long thread_id(void)
{
unsigned long ret;
ret = (unsigned long)pthread_self();
ret = (unsigned long) pthread_self();
return ret;
}
#pragma GCC diagnostic pop
static void crypto_lock_init(void)
{
int i;
crypto_lockarray = (pthread_mutex_t *)OPENSSL_malloc(CRYPTO_num_locks() *
sizeof(pthread_mutex_t));
for(i = 0; i<CRYPTO_num_locks(); i++) {
pthread_mutex_init(&(crypto_lockarray[i]), NULL);
crypto_lockarray =
(pthread_mutex_t *) OPENSSL_malloc(CRYPTO_num_locks() *
sizeof(pthread_mutex_t));
for (i = 0; i < CRYPTO_num_locks(); i++) {
if (pthread_mutex_init(&(crypto_lockarray[i]), NULL)) {
lprintf(fatal, "crypto_lockarray[%d] initialisation \
failed!\n", i);
};
}
CRYPTO_set_id_callback((unsigned long (*)())thread_id);
CRYPTO_set_locking_callback((void (*)())crypto_lock_callback);
CRYPTO_set_id_callback((unsigned long (*)()) thread_id);
CRYPTO_set_locking_callback((void (*)()) crypto_lock_callback);
}
/**
* Adapted from:
* \brief Curl share handle callback function
* \details Adapted from:
* https://curl.haxx.se/libcurl/c/threaded-shared-conn.html
*/
static void curl_callback_lock(CURL *handle, curl_lock_data data,
curl_lock_access access, void *userptr)
static void
curl_callback_lock(CURL *handle, curl_lock_data data,
curl_lock_access access, void *userptr)
{
(void)access; /* unused */
(void)userptr; /* unused */
(void)handle; /* unused */
(void)data; /* unused */
pthread_mutex_lock(&curl_lock);
(void) access; /* unused */
(void) userptr; /* unused */
(void) handle; /* unused */
(void) data; /* unused */
PTHREAD_MUTEX_LOCK(&curl_lock);
}
static void curl_callback_unlock(CURL *handle, curl_lock_data data,
void *userptr)
static void
curl_callback_unlock(CURL *handle, curl_lock_data data, void *userptr)
{
(void)userptr; /* unused */
(void)handle; /* unused */
(void)data; /* unused */
pthread_mutex_unlock(&curl_lock);
(void) userptr; /* unused */
(void) handle; /* unused */
(void) data; /* unused */
PTHREAD_MUTEX_UNLOCK(&curl_lock);
}
/**
* Adapted from:
* \brief Process a curl message
* \details Adapted from:
* https://curl.haxx.se/libcurl/c/10-at-a-time.html
*/
static void curl_process_msgs(CURLMsg *curl_msg, int n_running_curl,
int n_mesgs)
static void
curl_process_msgs(CURLMsg *curl_msg, int n_running_curl, int n_mesgs)
{
(void) n_running_curl;
(void) n_mesgs;
static int slept = 0;
static volatile int slept = 0;
if (curl_msg->msg == CURLMSG_DONE) {
TransferStruct *transfer;
TransferStruct *ts;
CURL *curl = curl_msg->easy_handle;
curl_easy_getinfo(curl_msg->easy_handle, CURLINFO_PRIVATE,
&transfer);
transfer->transferring = 0;
CURLcode ret =
curl_easy_getinfo(curl_msg->easy_handle, CURLINFO_PRIVATE,
&ts);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
ts->transferring = 0;
char *url = NULL;
curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
ret = curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
/* Wait for 5 seconds if we get HTTP 429 */
/*
* Wait before retrying if we hit a temporary failure, e.g. HTTP 429
*/
long http_resp = 0;
curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_resp);
if (http_resp == HTTP_TOO_MANY_REQUESTS) {
ret = curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_resp);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
if (HTTP_temp_failure(http_resp)) {
if (!slept) {
fprintf(stderr,
"curl_process_msgs(): HTTP 429, sleeping for %d sec\n",
HTTP_429_WAIT);
sleep(HTTP_429_WAIT);
lprintf(warning,
"HTTP %ld, sleeping for %d sec\n",
http_resp, CONFIG.http_wait_sec);
sleep(CONFIG.http_wait_sec);
slept = 1;
}
} else {
@@ -121,151 +153,116 @@ static void curl_process_msgs(CURLMsg *curl_msg, int n_running_curl,
}
if (!curl_msg->data.result) {
/* Transfer successful, query the file size */
if (transfer->type == FILESTAT) {
Link_set_stat(transfer->link, curl);
/*
* Transfer successful, set the file size
*/
if (ts->type == FILESTAT) {
Link_set_file_stat(ts->link, curl);
}
} else {
fprintf(stderr, "curl_process_msgs(): %d - %s <%s>\n",
lprintf(error, "%d - %s <%s>\n",
curl_msg->data.result,
curl_easy_strerror(curl_msg->data.result),
url);
curl_easy_strerror(curl_msg->data.result), url);
}
curl_multi_remove_handle(curl_multi, curl);
/* clean up the handle, if we are querying the file size */
if (transfer->type == FILESTAT) {
/*
* clean up the handle, if we are querying the file size
*/
if (ts->type == FILESTAT) {
curl_easy_cleanup(curl);
free(transfer);
FREE(ts);
}
} else {
fprintf(stderr, "curl_process_msgs(): curl_msg->msg: %d\n",
curl_msg->msg);
lprintf(warning, "curl_msg->msg: %d\n", curl_msg->msg);
}
}
/**
* \details effectively based on
* \details effectively based on
* https://curl.haxx.se/libcurl/c/multi-double.html
*/
int curl_multi_perform_once()
int curl_multi_perform_once(void)
{
pthread_mutex_lock(&transfer_lock);
/* Get curl multi interface to perform pending tasks */
lprintf(network_lock_debug,
"thread %x: locking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&transfer_lock);
/*
* Get curl multi interface to perform pending tasks
*/
int n_running_curl;
CURLMcode mc = curl_multi_perform(curl_multi, &n_running_curl);
if(mc > 0) {
fprintf(stderr, "curl_multi_perform(): %s\n", curl_multi_strerror(mc));
if (mc) {
lprintf(error, "%s\n", curl_multi_strerror(mc));
}
fd_set fdread;
fd_set fdwrite;
fd_set fdexcep;
int maxfd = -1;
long curl_timeo = -1;
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
/* set a default timeout for select() */
struct timeval timeout;
timeout.tv_sec = 1;
timeout.tv_usec = 0;
curl_multi_timeout(curl_multi, &curl_timeo);
/* We effectively cap timeout to 1 sec */
if (curl_timeo >= 0) {
timeout.tv_sec = curl_timeo / 1000;
if (timeout.tv_sec > 1) {
timeout.tv_sec = 1;
} else {
timeout.tv_usec = (curl_timeo % 1000) * 1000;
}
mc = curl_multi_poll(curl_multi, NULL, 0, 100, NULL);
if (mc) {
lprintf(error, "%s\n", curl_multi_strerror(mc));
}
/* get file descriptors from the transfers */
mc = curl_multi_fdset(curl_multi, &fdread, &fdwrite, &fdexcep, &maxfd);
if (mc > 0) {
fprintf(stderr, "curl_multi_fdset(): %s.\n", curl_multi_strerror(mc));
}
if (maxfd == -1) {
usleep(100*1000);
} else {
if (select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout) < 0) {
fprintf(stderr, "curl_multi_perform_once(): select(): %s.\n",
strerror(errno));
}
}
/* Process the message queue */
/*
* Process the message queue
*/
int n_mesgs;
CURLMsg *curl_msg;
while((curl_msg = curl_multi_info_read(curl_multi, &n_mesgs))) {
while ((curl_msg = curl_multi_info_read(curl_multi, &n_mesgs))) {
curl_process_msgs(curl_msg, n_running_curl, n_mesgs);
}
pthread_mutex_unlock(&transfer_lock);
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
return n_running_curl;
}
void network_config_init()
void NetworkSystem_init(void)
{
NETWORK_CONFIG.username = NULL;
NETWORK_CONFIG.password = NULL;
NETWORK_CONFIG.proxy = NULL;
NETWORK_CONFIG.proxy_user = NULL;
NETWORK_CONFIG.proxy_pass = NULL;
NETWORK_CONFIG.max_conns = DEFAULT_NETWORK_MAX_CONNS;
NETWORK_CONFIG.user_agent = "HTTPDirFS";
NETWORK_CONFIG.cache_enabled = 0;
NETWORK_CONFIG.cache_dir = NULL;
}
LinkTable *network_init(const char *url)
{
/* ------- Global related ----------*/
/*
* ------- Global related ----------
*/
if (curl_global_init(CURL_GLOBAL_ALL)) {
fprintf(stderr, "network_init(): curl_global_init() failed!\n");
exit(EXIT_FAILURE);
lprintf(fatal, "curl_global_init() failed!\n");
}
/* -------- Share related ----------*/
/*
* -------- Share related ----------
*/
CURL_SHARE = curl_share_init();
if (!(CURL_SHARE)) {
fprintf(stderr, "network_init(): curl_share_init() failed!\n");
exit(EXIT_FAILURE);
lprintf(fatal, "curl_share_init() failed!\n");
}
curl_share_setopt(CURL_SHARE, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
curl_share_setopt(CURL_SHARE, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
curl_share_setopt(CURL_SHARE, CURLSHOPT_SHARE, CURL_LOCK_DATA_SSL_SESSION);
curl_share_setopt(CURL_SHARE, CURLSHOPT_SHARE,
CURL_LOCK_DATA_SSL_SESSION);
if (pthread_mutex_init(&curl_lock, NULL) != 0) {
printf(
"network_init(): curl_lock initialisation failed!\n");
exit(EXIT_FAILURE);
if (pthread_mutex_init(&curl_lock, NULL)) {
lprintf(fatal, "curl_lock initialisation failed!\n");
}
curl_share_setopt(CURL_SHARE, CURLSHOPT_LOCKFUNC, curl_callback_lock);
curl_share_setopt(CURL_SHARE, CURLSHOPT_UNLOCKFUNC, curl_callback_unlock);
curl_share_setopt(CURL_SHARE, CURLSHOPT_UNLOCKFUNC,
curl_callback_unlock);
/* ------------- Multi related -----------*/
/*
* ------------- Multi related -----------
*/
curl_multi = curl_multi_init();
if (!curl_multi) {
fprintf(stderr, "network_init(): curl_multi_init() failed!\n");
exit(EXIT_FAILURE);
lprintf(fatal, "curl_multi_init() failed!\n");
}
curl_multi_setopt(curl_multi, CURLMOPT_MAX_TOTAL_CONNECTIONS,
NETWORK_CONFIG.max_conns);
CONFIG.max_conns);
curl_multi_setopt(curl_multi, CURLMOPT_MAX_HOST_CONNECTIONS,
NETWORK_CONFIG.max_conns);
CONFIG.max_conns);
/* ------------ Initialise locks ---------*/
/*
* ------------ Initialise locks ---------
*/
if (pthread_mutex_init(&transfer_lock, NULL)) {
fprintf(stderr,
"network_init(): transfer_lock initialisation failed!\n");
exit(EXIT_FAILURE);
lprintf(fatal, "transfer_lock initialisation failed!\n");
}
/*
@@ -273,89 +270,58 @@ LinkTable *network_init(const char *url)
* https://curl.haxx.se/libcurl/c/threaded-ssl.html
*/
crypto_lock_init();
/* --------- Print off SSL engine version stream --------- */
curl_version_info_data *data = curl_version_info(CURLVERSION_NOW);
fprintf(stderr, "libcurl SSL engine: %s\n", data->ssl_version);
/* --------- Set the length of the root link ----------- */
/* This is where the '/' should be */
ROOT_LINK_OFFSET = strnlen(url, MAX_PATH_LEN) - 1;
if (url[ROOT_LINK_OFFSET] != '/') {
/*
* If '/' is not there, it is automatically added, so we need to skip 2
* characters
*/
ROOT_LINK_OFFSET += 2;
} else {
/* If '/' is there, we need to skip it */
ROOT_LINK_OFFSET += 1;
}
/* ----------- Enable cache system --------------------*/
if (NETWORK_CONFIG.cache_enabled) {
if (NETWORK_CONFIG.cache_dir) {
CacheSystem_init(NETWORK_CONFIG.cache_dir, 0);
} else {
CacheSystem_init(url, 1);
}
}
/* ----------- Create the root link table --------------*/
ROOT_LINK_TBL = LinkTable_new(url);
return ROOT_LINK_TBL;
}
void transfer_blocking(CURL *curl)
{
/*
* We don't need to malloc here, as the transfer is finished before
* the variable gets popped from the stack
*/
volatile TransferStruct transfer;
transfer.type = DATA;
transfer.transferring = 1;
curl_easy_setopt(curl, CURLOPT_PRIVATE, &transfer);
CURLMcode res = curl_multi_add_handle(curl_multi, curl);
if(res > 0) {
fprintf(stderr, "transfer_blocking(): %d, %s\n",
res, curl_multi_strerror(res));
exit(EXIT_FAILURE);
TransferStruct *ts;
CURLcode ret = curl_easy_getinfo(curl, CURLINFO_PRIVATE, &ts);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
while (transfer.transferring) {
lprintf(network_lock_debug,
"thread %x: locking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&transfer_lock);
CURLMcode res = curl_multi_add_handle(curl_multi, curl);
if (res > 0) {
lprintf(error, "%d, %s\n", res, curl_multi_strerror(res));
}
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
while (ts->transferring) {
curl_multi_perform_once();
}
}
void transfer_nonblocking(CURL *curl)
{
lprintf(network_lock_debug,
"thread %x: locking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_LOCK(&transfer_lock);
CURLMcode res = curl_multi_add_handle(curl_multi, curl);
if(res > 0) {
fprintf(stderr, "transfer_nonblocking(): %s\n",
curl_multi_strerror(res));
if (res > 0) {
lprintf(error, "%s\n", curl_multi_strerror(res));
}
lprintf(network_lock_debug,
"thread %x: unlocking transfer_lock;\n", pthread_self());
PTHREAD_MUTEX_UNLOCK(&transfer_lock);
}
size_t write_memory_callback(void *contents, size_t size, size_t nmemb,
void *userp)
int HTTP_temp_failure(HTTPResponseCode http_resp)
{
size_t realsize = size * nmemb;
MemoryStruct *mem = (MemoryStruct *)userp;
mem->memory = realloc(mem->memory, mem->size + realsize + 1);
if(!mem->memory) {
/* out of memory! */
fprintf(stderr, "write_memory_callback(): realloc failure!\n");
exit(EXIT_FAILURE);
switch (http_resp) {
case HTTP_TOO_MANY_REQUESTS:
case HTTP_CLOUDFLARE_UNKNOWN_ERROR:
case HTTP_CLOUDFLARE_TIMEOUT:
return 1;
default:
return 0;
}
memmove(&mem->memory[mem->size], contents, realsize);
mem->size += realsize;
mem->memory[mem->size] = 0;
return realsize;
}


@@ -1,61 +1,35 @@
#ifndef NETWORK_H
#define NETWORK_H
/**
* \file network.h
* \brief network related functions
*/
typedef struct TransferStruct TransferStruct;
#include "link.h"
#include <curl/curl.h>
/** \brief HTTP response codes */
typedef enum {
HTTP_OK = 200,
HTTP_PARTIAL_CONTENT = 206,
HTTP_RANGE_NOT_SATISFIABLE = 416,
HTTP_TOO_MANY_REQUESTS = 429
}HTTPResponseCode;
typedef enum {
FILESTAT = 's',
DATA = 'd'
} TransferType;
typedef struct {
char *memory;
size_t size;
} MemoryStruct;
typedef struct {
TransferType type;
int transferring;
Link *link;
} TransferStruct;
typedef struct {
char *username;
char *password;
char *proxy;
char *proxy_user;
char *proxy_pass;
long max_conns;
char *user_agent;
int http_429_wait;
char *cache_dir;
int cache_enabled;
} NetworkConfigStruct;
/** \brief The waiting time after getting HTTP 429 */
extern int HTTP_429_WAIT;
/** \brief CURL configuration */
extern NetworkConfigStruct NETWORK_CONFIG;
HTTP_OK = 200,
HTTP_PARTIAL_CONTENT = 206,
HTTP_RANGE_NOT_SATISFIABLE = 416,
HTTP_TOO_MANY_REQUESTS = 429,
HTTP_CLOUDFLARE_UNKNOWN_ERROR = 520,
HTTP_CLOUDFLARE_TIMEOUT = 524
} HTTPResponseCode;
/** \brief curl shared interface */
extern CURLSH *CURL_SHARE;
/** \brief perform one transfer cycle */
int curl_multi_perform_once();
/** \brief initialise network config struct */
void network_config_init();
int curl_multi_perform_once(void);
/** \brief initialise the network module */
LinkTable *network_init(const char *url);
void NetworkSystem_init(void);
/** \brief blocking file transfer */
void transfer_blocking(CURL *curl);
@@ -63,8 +37,9 @@ void transfer_blocking(CURL *curl);
/** \brief non blocking file transfer */
void transfer_nonblocking(CURL *curl);
/** \brief callback function for file transfer */
size_t
write_memory_callback(void *contents, size_t size, size_t nmemb, void *userp);
/**
* \brief check if a HTTP response code corresponds to a temporary failure
*/
int HTTP_temp_failure(HTTPResponseCode http_resp);
#endif

src/sonic.c

@@ -0,0 +1,529 @@
#include "sonic.h"
#include "config.h"
#include "log.h"
#include "link.h"
#include "memcache.h"
#include "util.h"
#include <expat.h>
#include <assert.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
typedef struct {
char *server;
char *username;
char *password;
char *client;
char *api_version;
} SonicConfigStruct;
static SonicConfigStruct SONIC_CONFIG;
/**
* \brief initialise Sonic configuration struct
*/
void
sonic_config_init(const char *server, const char *username,
const char *password)
{
SONIC_CONFIG.server = strndup(server, MAX_PATH_LEN);
/*
* Correct for the extra '/'
*/
size_t server_url_len = strnlen(SONIC_CONFIG.server, MAX_PATH_LEN) - 1;
if (SONIC_CONFIG.server[server_url_len] == '/') {
SONIC_CONFIG.server[server_url_len] = '\0';
}
SONIC_CONFIG.username = strndup(username, MAX_FILENAME_LEN);
SONIC_CONFIG.password = strndup(password, MAX_FILENAME_LEN);
SONIC_CONFIG.client = DEFAULT_USER_AGENT;
if (!CONFIG.sonic_insecure) {
/*
* API 1.13.0 is the minimum version that supports
* salt authentication scheme
*/
SONIC_CONFIG.api_version = "1.13.0";
} else {
/*
* API 1.8.0 is the minimum version that supports ID3 mode
*/
SONIC_CONFIG.api_version = "1.8.0";
}
}
/**
* \brief generate authentication string
*/
static char *sonic_gen_auth_str(void)
{
if (!CONFIG.sonic_insecure) {
char *salt = generate_salt();
size_t pwd_len = strnlen(SONIC_CONFIG.password, MAX_FILENAME_LEN);
size_t pwd_salt_len = pwd_len + strnlen(salt, MAX_FILENAME_LEN);
char *pwd_salt = CALLOC(pwd_salt_len + 1, sizeof(char));
strncat(pwd_salt, SONIC_CONFIG.password, MAX_FILENAME_LEN);
strncat(pwd_salt + pwd_len, salt, MAX_FILENAME_LEN);
char *token = generate_md5sum(pwd_salt);
char *auth_str = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(auth_str, MAX_PATH_LEN,
".view?u=%s&t=%s&s=%s&v=%s&c=%s",
SONIC_CONFIG.username, token, salt,
SONIC_CONFIG.api_version, SONIC_CONFIG.client);
FREE(salt);
FREE(token);
return auth_str;
} else {
char *pwd_hex = str_to_hex(SONIC_CONFIG.password);
char *auth_str = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(auth_str, MAX_PATH_LEN,
".view?u=%s&p=enc:%s&v=%s&c=%s",
SONIC_CONFIG.username, pwd_hex,
SONIC_CONFIG.api_version, SONIC_CONFIG.client);
FREE(pwd_hex);
return auth_str;
}
}
/**
* \brief generate the first half of the request URL
*/
static char *sonic_gen_url_first_part(char *method)
{
char *auth_str = sonic_gen_auth_str();
char *url = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(url, MAX_PATH_LEN, "%s/rest/%s%s", SONIC_CONFIG.server,
method, auth_str);
FREE(auth_str);
return url;
}
/**
* \brief generate a getMusicDirectory request URL
*/
static char *sonic_getMusicDirectory_link(const char *id)
{
char *first_part = sonic_gen_url_first_part("getMusicDirectory");
char *url = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(url, MAX_PATH_LEN, "%s&id=%s", first_part, id);
FREE(first_part);
return url;
}
/**
* \brief generate a getArtist request URL
*/
static char *sonic_getArtist_link(const char *id)
{
char *first_part = sonic_gen_url_first_part("getArtist");
char *url = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(url, MAX_PATH_LEN, "%s&id=%s", first_part, id);
FREE(first_part);
return url;
}
/**
* \brief generate a getAlbum request URL
*/
static char *sonic_getAlbum_link(const char *id)
{
char *first_part = sonic_gen_url_first_part("getAlbum");
char *url = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(url, MAX_PATH_LEN, "%s&id=%s", first_part, id);
FREE(first_part);
return url;
}
/**
* \brief generate a download request URL
*/
static char *sonic_stream_link(const char *id)
{
char *first_part = sonic_gen_url_first_part("stream");
char *url = CALLOC(MAX_PATH_LEN + 1, sizeof(char));
snprintf(url, MAX_PATH_LEN, "%s&format=raw&id=%s", first_part, id);
FREE(first_part);
return url;
}
/**
* \brief The parser for Sonic index mode
 * \details This is the callback function called by the XML parser.
* \param[in] data user supplied data, in this case it is the pointer to the
* LinkTable.
* \param[in] elem the name of this element, it should be either "child" or
* "artist"
* \param[in] attr Each attribute seen in a start (or empty) tag occupies
* 2 consecutive places in this vector: the attribute name followed by the
* attribute value. These pairs are terminated by a null pointer.
* \note we are using strcmp rather than strncmp, because we are assuming the
* parser terminates the strings properly, which is a fair assumption,
* considering how mature expat is.
*/
static void XMLCALL
XML_parser_general(void *data, const char *elem, const char **attr)
{
/*
* Error checking
*/
if (!strcmp(elem, "error")) {
lprintf(error, "error:\n");
for (int i = 0; attr[i]; i += 2) {
lprintf(error, "%s: %s\n", attr[i], attr[i + 1]);
}
}
LinkTable *linktbl = (LinkTable *) data;
Link *link;
/*
* Please refer to the documentation at the function prototype of
* sonic_LinkTable_new_id3()
*/
if (!strcmp(elem, "child")) {
link = CALLOC(1, sizeof(Link));
/*
 * Initialise to LINK_DIR; LINK_FILE is set later if needed.
*/
link->type = LINK_DIR;
} else if (!strcmp(elem, "artist")
&& linktbl->links[0]->sonic.depth != 3) {
/*
* We want to skip the first "artist" element in the album table
*/
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
} else if (!strcmp(elem, "album")
&& linktbl->links[0]->sonic.depth == 3) {
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
/*
* The new table should be a level 4 song table
*/
link->sonic.depth = 4;
} else if (!strcmp(elem, "song")
&& linktbl->links[0]->sonic.depth == 4) {
link = CALLOC(1, sizeof(Link));
link->type = LINK_FILE;
} else {
/*
* The element does not contain directory structural information
*/
return;
}
int id_set = 0;
int linkname_set = 0;
int track = 0;
char *title = "";
char *suffix = "";
for (int i = 0; attr[i]; i += 2) {
if (!strcmp("id", attr[i])) {
link->sonic.id = CALLOC(MAX_FILENAME_LEN + 1, sizeof(char));
strncpy(link->sonic.id, attr[i + 1], MAX_FILENAME_LEN);
id_set = 1;
continue;
}
if (!strcmp("path", attr[i])) {
memset(link->linkname, 0, MAX_FILENAME_LEN);
/*
* Skip to the last '/' if it exists
*/
char *s = strrchr(attr[i + 1], '/');
if (s) {
strncpy(link->linkname, s + 1, MAX_FILENAME_LEN);
} else {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
}
linkname_set = 1;
continue;
}
/*
* "title" is used for directory name,
* "name" is for top level directories
* N.B. "path" attribute is given the preference
*/
if (!linkname_set) {
if (!strcmp("title", attr[i])
|| !strcmp("name", attr[i])) {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
linkname_set = 1;
continue;
}
}
if (!strcmp("isDir", attr[i])) {
if (!strcmp("false", attr[i + 1])) {
link->type = LINK_FILE;
}
continue;
}
if (!strcmp("created", attr[i])) {
struct tm *tm = CALLOC(1, sizeof(struct tm));
strptime(attr[i + 1], "%Y-%m-%dT%H:%M:%S.000Z", tm);
link->time = mktime(tm);
FREE(tm);
continue;
}
if (!strcmp("size", attr[i])) {
link->content_length = atoll(attr[i + 1]);
continue;
}
if (!strcmp("track", attr[i])) {
track = atoi(attr[i + 1]);
continue;
}
if (!strcmp("title", attr[i])) {
title = (char *) attr[i + 1];
continue;
}
if (!strcmp("suffix", attr[i])) {
suffix = (char *) attr[i + 1];
continue;
}
}
if (!linkname_set && strnlen(title, MAX_PATH_LEN) > 0 &&
strnlen(suffix, MAX_PATH_LEN) > 0) {
snprintf(link->linkname, MAX_FILENAME_LEN, "%02d - %s.%s",
track, title, suffix);
linkname_set = 1;
}
/*
* Clean up if linkname or id is not set
*/
if (!linkname_set || !id_set) {
FREE(link);
return;
}
if (link->type == LINK_FILE) {
char *url = sonic_stream_link(link->sonic.id);
strncpy(link->f_url, url, MAX_PATH_LEN);
FREE(url);
}
LinkTable_add(linktbl, link);
}
static void sanitise_LinkTable(LinkTable *linktbl)
{
for (int i = 0; i < linktbl->num; i++) {
if (!strcmp(linktbl->links[i]->linkname, ".")) {
/* Note the super long sanitised name to avoid collision */
strcpy(linktbl->links[i]->linkname, "__DOT__");
}
if (!strcmp(linktbl->links[i]->linkname, "/")) {
/* Ditto */
strcpy(linktbl->links[i]->linkname, "__FORWARD-SLASH__");
}
for (size_t j = 0; j < strlen(linktbl->links[i]->linkname); j++) {
if (linktbl->links[i]->linkname[j] == '/') {
linktbl->links[i]->linkname[j] = '-';
}
}
if (linktbl->links[i]->next_table != NULL) {
sanitise_LinkTable(linktbl->links[i]->next_table);
}
}
}
/**
* \brief parse a XML string in order to fill in the LinkTable
*/
static LinkTable *sonic_url_to_LinkTable(const char *url,
XML_StartElementHandler handler, int depth)
{
LinkTable *linktbl = LinkTable_alloc(url);
linktbl->links[0]->sonic.depth = depth;
/*
* start downloading the base URL
*/
TransferStruct xml = Link_download_full(linktbl->links[0]);
if (xml.curr_size == 0) {
LinkTable_free(linktbl);
return NULL;
}
XML_Parser parser = XML_ParserCreate(NULL);
XML_SetUserData(parser, linktbl);
XML_SetStartElementHandler(parser, handler);
if (XML_Parse(parser, xml.data, xml.curr_size, 1) == XML_STATUS_ERROR) {
lprintf(error,
"Parse error at line %lu: %s\n",
XML_GetCurrentLineNumber(parser),
XML_ErrorString(XML_GetErrorCode(parser)));
}
XML_ParserFree(parser);
FREE(xml.data);
LinkTable_print(linktbl);
sanitise_LinkTable(linktbl);
return linktbl;
}
LinkTable *sonic_LinkTable_new_index(const char *id)
{
char *url;
if (strcmp(id, "0")) {
url = sonic_getMusicDirectory_link(id);
} else {
url = sonic_gen_url_first_part("getIndexes");
}
LinkTable *linktbl =
sonic_url_to_LinkTable(url, XML_parser_general, 0);
FREE(url);
return linktbl;
}
static void XMLCALL
XML_parser_id3_root(void *data, const char *elem, const char **attr)
{
if (!strcmp(elem, "error")) {
lprintf(error, "error:\n");
for (int i = 0; attr[i]; i += 2) {
lprintf(error, "%s: %s\n", attr[i], attr[i + 1]);
}
}
LinkTable *root_linktbl = (LinkTable *) data;
LinkTable *this_linktbl = NULL;
/*
 * Set the current linktbl, if we have more than the head link.
*/
if (root_linktbl->num > 1) {
this_linktbl =
root_linktbl->links[root_linktbl->num - 1]->next_table;
}
int id_set = 0;
int linkname_set = 0;
Link *link;
if (!strcmp(elem, "index")) {
/*
* Add a subdirectory
*/
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
for (int i = 0; attr[i]; i += 2) {
if (!strcmp("name", attr[i])) {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
linkname_set = 1;
/*
* Allocate a new LinkTable
*/
link->next_table = LinkTable_alloc("/");
}
}
/*
* Make sure we don't add an empty directory
*/
if (linkname_set) {
LinkTable_add(root_linktbl, link);
} else {
FREE(link);
}
return;
} else if (!strcmp(elem, "artist")) {
link = CALLOC(1, sizeof(Link));
link->type = LINK_DIR;
/*
* The new table should be a level 3 album table
*/
link->sonic.depth = 3;
for (int i = 0; attr[i]; i += 2) {
if (!strcmp("name", attr[i])) {
strncpy(link->linkname, attr[i + 1], MAX_FILENAME_LEN);
linkname_set = 1;
continue;
}
if (!strcmp("id", attr[i])) {
link->sonic.id =
CALLOC(MAX_FILENAME_LEN + 1, sizeof(char));
strncpy(link->sonic.id, attr[i + 1], MAX_FILENAME_LEN);
id_set = 1;
continue;
}
}
/*
* Clean up if linkname is not set
*/
if (!linkname_set || !id_set) {
FREE(link);
return;
}
LinkTable_add(this_linktbl, link);
}
/*
* If we reach here, then this element does not contain directory structural
* information
*/
}
LinkTable *sonic_LinkTable_new_id3(int depth, const char *id)
{
char *url;
LinkTable *linktbl = ROOT_LINK_TBL;
switch (depth) {
/*
* Root table
*/
case 0:
url = sonic_gen_url_first_part("getArtists");
linktbl = sonic_url_to_LinkTable(url, XML_parser_id3_root, 0);
FREE(url);
break;
/*
* Album table - get all the albums of an artist
*/
case 3:
url = sonic_getArtist_link(id);
linktbl = sonic_url_to_LinkTable(url, XML_parser_general, depth);
FREE(url);
break;
/*
* Song table - get all the songs of an album
*/
case 4:
url = sonic_getAlbum_link(id);
linktbl = sonic_url_to_LinkTable(url, XML_parser_general, depth);
FREE(url);
break;
default:
/*
* We shouldn't reach here.
*/
lprintf(fatal, "invalid depth: %d.\n", depth);
break;
}
return linktbl;
}

src/sonic.h (new file, 54 lines)

@@ -0,0 +1,54 @@
#ifndef SONIC_H
#define SONIC_H
/**
* \file sonic.h
 * \brief Sonic related functions
*/
typedef struct {
/**
* \brief Sonic id field
* \details This is used to store the following:
 * - Artist ID
* - Album ID
* - Song ID
* - Sub-directory ID (in the XML response, this is the ID on the "child"
* element)
*/
char *id;
/**
* \brief Sonic directory depth
* \details This is used exclusively in ID3 mode to store the depth of the
* current directory.
*/
int depth;
} Sonic;
#include "link.h"
/**
* \brief Initialise Sonic configuration.
*/
void sonic_config_init(const char *server, const char *username,
const char *password);
/**
* \brief Create a new Sonic LinkTable in index mode
*/
LinkTable *sonic_LinkTable_new_index(const char *id);
/**
* \brief Create a new Sonic LinkTable in ID3 mode
 * \details In this mode, the filesystem effectively has the following levels:
* 0. Root table
* 1. Index table
* 2. Artist table
* 3. Album table
* 4. Song table
* 5. Individual song (not a table)
* \param[in] depth the level of the requested table
* \param[in] id the id of the requested table
*/
LinkTable *sonic_LinkTable_new_id3(int depth, const char *id);
#endif

src/util.c

@@ -1,24 +1,46 @@
#include "util.h"
#include <stdio.h>
#include "config.h"
#include "log.h"
#include <openssl/md5.h>
#include <uuid/uuid.h>
#include <errno.h>
#include <execinfo.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
/**
* \brief Backtrace buffer size
*/
#define BT_BUF_SIZE 100
/**
* \brief The length of a MD5SUM string
*/
#define MD5_HASH_LEN 32
/**
* \brief The length of the salt
* \details This is basically the length of a UUID
*/
#define SALT_LEN 36
char *path_append(const char *path, const char *filename)
{
int needs_separator = 0;
if ((path[strnlen(path, MAX_PATH_LEN) - 1] != '/')
&& (filename[0] != '/')) {
needs_separator = 1;
}
char *str;
size_t ul = strnlen(path, MAX_PATH_LEN);
size_t sl = strnlen(filename, MAX_FILENAME_LEN);
str = CALLOC(ul + sl + needs_separator + 1, sizeof(char));
strncpy(str, path, ul);
if (needs_separator) {
str[ul] = '/';
@@ -31,3 +53,105 @@ int64_t round_div(int64_t a, int64_t b)
{
return (a + (b / 2)) / b;
}
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t *x)
{
int i;
i = pthread_mutex_unlock(x);
if (i) {
lprintf(fatal,
"thread %lx: %d, %s\n", (unsigned long) pthread_self(), i, strerror(i));
}
}
void PTHREAD_MUTEX_LOCK(pthread_mutex_t *x)
{
int i;
i = pthread_mutex_lock(x);
if (i) {
lprintf(fatal,
"thread %lx: %d, %s\n", (unsigned long) pthread_self(), i, strerror(i));
}
}
void exit_failure(void)
{
int nptrs;
void *buffer[BT_BUF_SIZE];
nptrs = backtrace(buffer, BT_BUF_SIZE);
fprintf(stderr, "\nOops! HTTPDirFS crashed! :(\n");
fprintf(stderr, "backtrace() returned the following %d addresses:\n",
nptrs);
backtrace_symbols_fd(buffer, nptrs, STDERR_FILENO);
exit(EXIT_FAILURE);
}
void erase_string(FILE *file, size_t max_len, char *s)
{
size_t l = strnlen(s, max_len);
for (size_t k = 0; k < l; k++) {
fprintf(file, "\b");
}
for (size_t k = 0; k < l; k++) {
fprintf(file, " ");
}
for (size_t k = 0; k < l; k++) {
fprintf(file, "\b");
}
}
char *generate_salt(void)
{
char *out;
out = CALLOC(SALT_LEN + 1, sizeof(char));
uuid_t uu;
uuid_generate(uu);
uuid_unparse(uu, out);
return out;
}
char *generate_md5sum(const char *str)
{
MD5_CTX c;
unsigned char md5[MD5_DIGEST_LENGTH];
size_t len = strnlen(str, MAX_PATH_LEN);
char *out = CALLOC(MD5_HASH_LEN + 1, sizeof(char));
MD5_Init(&c);
MD5_Update(&c, str, len);
MD5_Final(md5, &c);
for (int i = 0; i < MD5_DIGEST_LENGTH; i++) {
sprintf(out + 2 * i, "%02x", md5[i]);
}
return out;
}
void *CALLOC(size_t nmemb, size_t size)
{
void *ptr = calloc(nmemb, size);
if (!ptr) {
lprintf(fatal, "%s!\n", strerror(errno));
}
return ptr;
}
void FREE(void *ptr)
{
if (ptr) {
free(ptr);
} else {
lprintf(fatal, "attempted to free NULL ptr!\n");
}
}
char *str_to_hex(char *s)
{
char *hex = CALLOC(strnlen(s, MAX_PATH_LEN) * 2 + 1, sizeof(char));
for (char *c = s, *h = hex; *c; c++, h += 2) {
/* %02x with an unsigned cast keeps exactly two digits per byte */
sprintf(h, "%02x", (unsigned char) *c);
}
return hex;
}

src/util.h

@@ -1,28 +1,19 @@
#ifndef UTIL_H
#define UTIL_H
#include <stdint.h>
/**
* \file util.h
* \brief utility functions
*/
/**
* \brief the maximum length of a path and a URL.
 * \details This corresponds to the maximum path length under Ext4.
*/
#define MAX_PATH_LEN 4096
/** \brief the maximum length of a filename. */
#define MAX_FILENAME_LEN 255
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
/**
* \brief append a path
 * \details This function appends the next level to a path, taking the
 * trailing slash of the upper level into account.
*
* \note You need to free the char * after use.
*/
char *path_append(const char *path, const char *filename);
@@ -31,5 +22,58 @@ char *path_append(const char *path, const char *filename);
*/
int64_t round_div(int64_t a, int64_t b);
/**
* \brief wrapper for pthread_mutex_lock(), with error handling
*/
void PTHREAD_MUTEX_LOCK(pthread_mutex_t *x);
/**
* \brief wrapper for pthread_mutex_unlock(), with error handling
*/
void PTHREAD_MUTEX_UNLOCK(pthread_mutex_t *x);
/**
* \brief wrapper for exit(EXIT_FAILURE), with error handling
*/
void exit_failure(void);
/**
* \brief erase a string from the terminal
*/
void erase_string(FILE *file, size_t max_len, char *s);
/**
 * \brief generate the salt for the authentication string
* \details this effectively generates a UUID string, which we use as the salt
* \return a pointer to a 37-char array with the salt.
*/
char *generate_salt(void);
/**
* \brief generate the md5sum of a string
* \param[in] str a character array for the input string
 * \return a pointer to a 33-char array with the md5sum
*/
char *generate_md5sum(const char *str);
/**
* \brief wrapper for calloc(), with error handling
*/
void *CALLOC(size_t nmemb, size_t size);
/**
 * \brief wrapper for free(); passing a NULL pointer is a fatal error.
*/
void FREE(void *ptr);
/**
* \brief Convert a string to hex
*/
char *str_to_hex(char *s);
/**
* \brief initialise the configuration data structure
*/
void Config_init(void);
#endif