Compare commits

...

55 Commits

Author SHA1 Message Date
Fufu Fang 07475660f1
updated LinkTable invalidation 2024-05-11 23:15:32 +01:00
Fufu Fang a5a53442b2
updated the description for refreshing directories 2024-05-11 17:28:23 +01:00
Fufu Fang 9e383ad7a3
Moved linktable freshness check around
Fixed https://github.com/fangfufu/httpdirfs/issues/141
2024-05-11 01:53:30 +01:00
Fufu Fang 127f2194d0
added more comments 2024-05-07 01:44:01 +01:00
Fufu Fang 4fb95ee5a0
attempt to fix codeql 2024-05-06 00:47:22 +01:00
Fufu Fang 720db5aafa
fixed cache system for percentage encoded file in single-file mode 2024-05-06 00:12:03 +01:00
Fufu Fang 28293b5ccd
fixed erroneous error check 2024-05-05 03:14:51 +01:00
Fufu Fang 1a20318654
added more debug statements 2024-05-05 02:55:10 +01:00
Fufu Fang 9a7eabd170
modified debug message 2024-05-05 02:04:31 +01:00
Fufu Fang 01fd2e9559
changed the way debug level works 2024-05-05 02:00:46 +01:00
Fufu Fang be666d72e9
removed semi-colon at the end of a macro 2024-05-05 00:32:00 +01:00
Fufu Fang 1fa3830dec
run through the formatter 2024-05-03 07:39:14 +01:00
Fufu Fang 8aa7c570c8
added a todo note 2024-05-03 07:37:44 +01:00
Fufu Fang 389a657170
improved debug message 2024-05-03 07:33:41 +01:00
Fufu Fang 257bb22e80
Merge branch 'master' into debug 2024-05-03 07:20:08 +01:00
Fufu Fang a299819b7d
fixed a memory leak, improved error handling in cache system 2024-05-03 07:19:24 +01:00
Fufu Fang 3e7d9f0294
start labelling what might be wrong. 2024-05-03 06:44:59 +01:00
Fufu Fang 63455c54cc
initial commit to the debug branch 2024-05-03 06:44:33 +01:00
Fufu Fang d4c7d8c92a
added more debug message 2024-05-03 06:44:01 +01:00
Fufu Fang dfc83d0e1c
improved debug message 2024-05-03 06:24:50 +01:00
Fufu Fang 96a7c248d3
improved debug message 2024-05-03 05:59:09 +01:00
Fufu Fang f92fe4232a
attempt to fix codeQL 2024-05-02 07:07:58 +01:00
Fufu Fang 91351689f1
LinkTable now saves the refresh time 2024-05-02 06:59:22 +01:00
Fufu Fang 1a3f36a92c
Corrected an implementation error and added more comments 2024-05-02 04:45:34 +01:00
Fufu Fang d6d4af0c8c
Update README.md
Fix https://github.com/fangfufu/httpdirfs/issues/136
2024-04-20 01:30:52 +01:00
Fufu Fang f48ee93931
Update README.md 2024-02-01 09:58:05 +00:00
Fufu Fang 983b1edfbd
Updated README 2024-02-01 06:28:36 +00:00
Fufu Fang 707d9b9253
Configure online code scanning tools
- Added .deepsource.toml for Deep Source
- Added configuration for GitHub CodeQL
2024-02-01 02:53:26 +00:00
Fufu Fang 81aac8bb57
fixed spelling, ran through the formatter 2024-01-13 12:31:47 +00:00
Mattias Runge-Broberg 35a213942c
Fix for single file mode not working
- Fix for not sending ranges which exceed the content-length which will result
in an error.
- Fix for byte range being set to 1 byte too large, it should be the end index,
not the size as described in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
2024-01-13 12:30:52 +00:00
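To make the off-by-one above concrete: HTTP range headers, and libcurl's CURLOPT_RANGE, take an inclusive end index, so a request for `size` bytes starting at `offset` must end at `offset + size - 1`. A minimal sketch along those lines (illustrative only, not the project's code):

```c
#include <stdio.h>
#include <curl/curl.h>

/* Illustrative only: request `size` bytes starting at `offset`.
 * The range end is an inclusive index, so it is offset + size - 1;
 * using offset + size would ask for one byte too many. */
CURLcode fetch_range(CURL *curl, const char *url, long offset, long size)
{
    char range[64];
    snprintf(range, sizeof(range), "%ld-%ld", offset, offset + size - 1);
    /* e.g. offset=0, size=1024 -> "0-1023", the first 1024 bytes */
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_RANGE, range);
    return curl_easy_perform(curl);
}
```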
Fufu Fang 595c6d275e
Remove spurious code
Remove spurious code flagged by 8451da6ac7,
which was introduced by e76b079fe6
Closes https://github.com/fangfufu/httpdirfs/issues/124
2023-10-03 23:10:24 +01:00
chrysn bd33966337 Allow leading `./` segments in links 2023-10-02 23:44:18 +01:00
Jonathan Kamens 29c3eb8f67 Convert build process to use autotools (autoconf, automake, etc.)
This commit converts the build process from a hand-written Makefile
that works on Linux, FreeBSD, and macOS, to an automatically generated
Makefile managed by the autotools toolset.

This includes:

* Add the compile, config.guess, config.sub, depcomp, install-sh, and
  missing helper scripts that autotools requires to be shipped with
  the package in order for configure to work.
* Rename Makefile to Makefile.am and restructure it for compatibility
  with autotools and specifically with the stuff in our configure
  script.
* Create the configure.ac source file which is turned into the
  configure script.
* Rename Doxyfile to Doxyfile.in so that the source directories can be
  substituted into it at configure time.
* Tweak .gitignore to ignore temporary and output files related to
  autotools.
* Generate Makefile.in, aclocal.m4, and configure using `autoreconf`
  and include them as checked-in source files.

While I can't fully document how autotools works here the basic
workflow is that when you need to make changes to the build, you
update Makefile.am and/or configure.ac as needed, run `autoreconf`,
and commit the changes you made as well as any resulting changes to
Makefile.in, aclocal.m4, and configure. Makefile should _not_ be
committed into the source tree; it should always be generated using
configure on the system where the build is being run.
2023-09-29 23:45:47 +01:00
Jonathan Kamens ed93a133df Fix minor logic bug and code smell in make_link_relative
Don't assume that the reason why we didn't find enough slashes in a
URL is because the user didn't specify the slash at the end of the
host name, unless we did find the first two slashes.

Add some curly braces around an if block to make it clear to people
and the compiler which statement an `else` applies to. The logic was
correct before but the indentation was wrong, making it especially
confusing.
2023-09-29 23:45:47 +01:00
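To make the brace point concrete, here is the classic dangling-else hazard in miniature (hypothetical code, not the project's):

```c
#include <stdio.h>

/* Without braces, `else` binds to the nearest `if`,
 * whatever the indentation suggests. */
void report(int found_host, int found_path)
{
    if (found_host)
        if (found_path)
            puts("host and path found");
    else                   /* pairs with `if (found_path)`, not `if (found_host)` */
        puts("no host?");  /* actually runs when host is found but path is not */
}

int main(void)
{
    report(0, 0);   /* prints nothing, despite the apparent no-host `else` */
    report(1, 0);   /* prints "no host?" */
    return 0;
}
```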
Jonathan Kamens 7bcd43068d Fix broken curl HTTP response code check
The check for the HTTP response code from the curl library was written
incorrectly and guaranteed to always fail. I've fixed the logic to
reflect what I believe was intended.
2023-09-29 23:45:47 +01:00
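For reference, a correct check inspects the HTTP status itself rather than only the CURLcode; a minimal sketch along those lines (not necessarily the project's exact code):

```c
#include <stdbool.h>
#include <curl/curl.h>

/* Report whether the last transfer on this handle returned a 2xx status.
 * curl_easy_getinfo() succeeding only means the query worked; the HTTP
 * status code still has to be examined separately. */
bool transfer_ok(CURL *curl)
{
    long http_code = 0;
    if (curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code) != CURLE_OK)
        return false;
    return http_code >= 200 && http_code < 300;
}
```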
Jonathan Kamens ab49ca76b6 Add missing return value check for fread call 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8451da6ac7 Comment out small block of code that doesn't do anything
There's a small block of code that calls strnlen on a string, saves
the result in a variable, conditionally decrements the variable, and
then does nothing with it, making the entire block of code a no-op.

I don't want to just remove it entirely since it's possible that there
was intended to be some sort of check here that was inadvertently
omitted. So to make the compiler stop complaining I've commented out
the code, but I've left a comment above it explaining why it was
commented out and pointing out that maybe something different needs to
be done with it.
2023-09-29 23:45:47 +01:00
Jonathan Kamens e253b4a9ee Eliminate some compiler warnings 2023-09-29 23:45:47 +01:00
Jonathan Kamens 8f0ef158c0 Remove spurious arguments to print_version() 2023-09-29 23:45:47 +01:00
Jonathan Kamens c532661d29 Add missing error-checking for return value of fread
Several calls to fread were missing checks to ensure that the expected
amount of data was read.
2023-09-29 23:45:47 +01:00
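The shape of such a check, as a sketch rather than the project's actual code: fread() returns the number of items actually read, and a short count (end of file or I/O error) must not be treated as success.

```c
#include <stdio.h>

/* Read exactly `len` bytes or fail; a short fread() means EOF or error. */
int read_exactly(FILE *fp, void *buf, size_t len)
{
    if (fread(buf, 1, len, fp) != len) {
        if (ferror(fp))
            perror("fread");
        return -1;
    }
    return 0;
}
```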
Jonathan Kamens 7363adaf12 Handle sites that put unencoded characters in URLs that curl dislikes
Some sites put unencoded characters in their href attributes that
really should be encoded, most notably spaces. Curl won't accept a URL
with a space in it, and perhaps other such characters as well. Address
this by properly encoding characters in URLs before feeding them to
Curl.
2023-09-29 12:47:55 +01:00
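As an illustration of the general idea rather than the project's exact encoder, libcurl's own curl_easy_escape() percent-encodes a string. Note that it escapes every reserved character, including '/', so a real implementation would encode each path segment separately or only the characters that need it.

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    /* an href with an unencoded space, as seen on some servers */
    char *escaped = curl_easy_escape(curl, "My Documents", 0);
    printf("%s\n", escaped);   /* prints "My%20Documents" */
    curl_free(escaped);
    curl_easy_cleanup(curl);
    return 0;
}
```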
Jonathan Kamens e94b5441f3 Add a few more debug messages to help trace program execution 2023-09-29 12:47:55 +01:00
Jonathan Kamens 3beccd2c2d Enabling debugging on command line should enable debug logging
I believe an appropriate expectation is that if the user enables
debugging with a command-line flag, then that should also enable
messages designated as debug messages in the code to be printed.
2023-09-29 12:47:55 +01:00
Jonathan Kamens 4d323b846f Do the right thing with sites that use absolute links
On some sites, the link to each subfolder is an absolute link rather
than a relative one. To accommodate this, convert the links from
absolute to relative before storing them in the link table.
2023-09-29 12:47:55 +01:00
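A simplified sketch of that conversion, with illustrative logic and names (the project's own make_link_relative() handles more cases, such as checking that the link really lies under the mounted root):

```c
#include <stdio.h>
#include <string.h>

/* Simplified: turn "https://host/pub/dir/" into "pub/dir/".
 * Real code must also verify the link lies under the root URL. */
const char *link_relative(const char *href)
{
    const char *p = strstr(href, "://");
    if (!p)
        return href;              /* already relative */
    p = strchr(p + 3, '/');       /* first '/' after the host name */
    return p ? p + 1 : "";
}

int main(void)
{
    printf("%s\n", link_relative("https://archive.mozilla.org/pub/firefox/"));
    /* prints "pub/firefox/" */
    return 0;
}
```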
Jonathan Kamens 41cb4b80bc Do the right thing with sites that require the final slash
Some web sites will return 404 if you fetch a directory without the
final slash. For example, https://archive.mozilla.org/pub/ works,
https://archive.mozilla.org/pub does not. We need to do two things to
accommodate this:

* When processing the root URL of the filesystem, instead of stripping
  off the final slash, just set the offset to ignore it.
* In the link structure, store the actual URL tail of the link
  separately from its name, final slash and all if there is one, and
  append that instead of the name when constructing the URL for curl.
2023-09-29 12:47:55 +01:00
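A sketch of the second point with illustrative field names, not the project's actual Link struct: keep the display name and the raw URL tail separately, and append the tail, slash and all, when building the request URL.

```c
#include <stdio.h>

typedef struct {
    char name[256];       /* "pub"  - what readdir shows                   */
    char url_tail[256];   /* "pub/" - href as served, trailing slash kept  */
} ExampleLink;

/* Building the request URL appends the stored tail, not the name, so
 * https://archive.mozilla.org/pub/ is requested rather than .../pub */
void build_url(char *out, size_t n, const char *base, const ExampleLink *l)
{
    snprintf(out, n, "%s%s", base, l->url_tail);
}
```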
Fufu Fang 1e80844831 ran the code through formatter 2023-07-26 07:48:33 +08:00
Fufu Fang 6d8db94458 minor formatting changes for PR #114 2023-07-26 07:48:22 +08:00
Fufu Fang 282605b0ac fix: changed deprecated libcurl call 2023-07-25 14:57:08 +08:00
Mike Morrison a309994b9e
Add setting to refresh directory contents (#114)
Refresh a directory's contents when fs_readdir is called
if it has been more than the number of seconds specified by
--refresh_timeout since the directory was last indexed.
2023-03-31 13:26:15 +01:00
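The behaviour this option controls boils down to a timestamp comparison; a minimal sketch with illustrative names, not the project's LinkTable code:

```c
#include <time.h>
#include <stdbool.h>

/* A directory listing is stale once more than refresh_timeout seconds
 * have passed since it was last indexed; fs_readdir then re-indexes it. */
bool needs_refresh(time_t last_indexed, long refresh_timeout)
{
    return difftime(time(NULL), last_indexed) > (double) refresh_timeout;
}
```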
Kian-Meng Ang 9a7016f29b
Fix typos (#117)
Found via `codespell`
2023-03-28 05:00:07 +01:00
Fufu Fang 8479feb2f6
Bumped version number to 1.2.5 for Debian release 2023-02-24 19:47:23 +00:00
Fufu Fang fe45afc6a1
Remove the usage of UBSAN
Address issue #113. Use of UBSAN in runtime could introduce
vulnerabilities.

Original bug report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1031744

Reference:
https://www.openwall.com/lists/oss-security/2016/02/17/9
2023-02-23 01:44:18 +00:00
Jérôme Charaoui e9f60d5221
fix typo 2023-01-28 12:02:31 -05:00
Jérôme Charaoui 74fac1dce0
bump VERSION in Makefile 2023-01-28 12:01:06 -05:00
Fufu Fang 9b72f97bcf
Update README.md 2023-01-14 00:04:12 +00:00
29 changed files with 14493 additions and 294 deletions

4
.deepsource.toml Normal file

@ -0,0 +1,4 @@
version = 1
[[analyzers]]
name = "cxx"

91
.github/workflows/codeql.yml vendored Normal file
View File

@ -0,0 +1,91 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  schedule:
    - cron: '18 19 * * 1'

jobs:
  analyze:
    name: Analyze
    # Runner size impacts CodeQL analysis time. To learn more, please see:
    #   - https://gh.io/recommended-hardware-resources-for-running-codeql
    #   - https://gh.io/supported-runners-and-hardware-resources
    #   - https://gh.io/using-larger-runners
    # Consider using larger runners for possible analysis time improvements.
    runs-on: 'ubuntu-latest'
    timeout-minutes: 360
    permissions:
      # required for all workflows
      security-events: write
      # only required for workflows in private repositories
      actions: read
      contents: read
    strategy:
      fail-fast: false
      matrix:
        language: [ 'c-cpp' ]
        # CodeQL supports [ 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift' ]
        # Use only 'java-kotlin' to analyze code written in Java, Kotlin or both
        # Use only 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
    - name: Install dependencies
      run: |
        sudo apt-get update
        sudo apt-get install libgumbo-dev libfuse-dev libssl-dev \
          libcurl4-openssl-dev uuid-dev help2man libexpat1-dev pkg-config \
          autoconf
    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v3
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.
        # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
        # queries: security-extended,security-and-quality
    # Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
    # If this step fails, then you should remove it and run the build manually (see below)
    - name: Autobuild
      uses: github/codeql-action/autobuild@v3
    # Command-line programs to run using the OS shell.
    # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
    # If the Autobuild fails above, remove it, uncomment the following three lines,
    # and modify them (or add more) to build your code; the commented run step below
    # shows the shape such a step takes.
    # - run: |
    #     echo "Run, Build Application using script"
    #     ./location_of_script_within_repo/buildscript.sh
    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v3
      with:
        category: "/language:${{matrix.language}}"

12
.gitignore vendored

@ -14,5 +14,17 @@ doc/html
*.c~
*.h~
# autotools
autom4te.cache
#Others
mnt
# Generated files
Doxyfile
Makefile
config.log
config.status
doc
src/.deps
src/.dirstamp

CHANGELOG.md

@ -5,6 +5,25 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Fixed
- The refreshed LinkTable is now saved
(https://github.com/fangfufu/httpdirfs/issues/141).
- Only one LinkTable of the same directory is created when the cache mode is
enabled (https://github.com/fangfufu/httpdirfs/issues/140).
- Cache mode now works correctly with escaped URLs
(https://github.com/fangfufu/httpdirfs/issues/138).
### Changed
- Improved LinkTable caching. LinkTable invalidation is now purely based on
timeout.
## [1.2.5] - 2023-02-24
### Fixed
- No longer compile with UBSAN enabled by default to avoid introducing
security vulnerability.
## [1.2.4] - 2023-01-11
### Added
@ -211,7 +230,8 @@ ${XDG_CONFIG_HOME}/httpdirfs, rather than ${HOME}/.httpdirfs
## [1.0] - 2018-08-22
- Initial release, everything works correctly, as far as I know.
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.2.4...master
[Unreleased]: https://github.com/fangfufu/httpdirfs/compare/1.2.5...master
[1.2.5]: https://github.com/fangfufu/httpdirfs/compare/1.2.4...1.2.5
[1.2.4]: https://github.com/fangfufu/httpdirfs/compare/1.2.3...1.2.4
[1.2.3]: https://github.com/fangfufu/httpdirfs/compare/1.2.2...1.2.3
[1.2.2]: https://github.com/fangfufu/httpdirfs/compare/1.2.1...1.2.2

Doxyfile → Doxyfile.in

@ -790,8 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = . \
src
INPUT = @srcdir@ @srcdir@/src
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses

Makefile

@ -1,99 +0,0 @@
VERSION = 1.2.3
CFLAGS += -g -O2 -Wall -Wextra -Wshadow \
-fsanitize=undefined -fanalyzer -Wno-analyzer-file-leak \
-rdynamic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVERSION=\"$(VERSION)\"\
`pkg-config --cflags-only-I gumbo libcurl fuse uuid expat`
LDFLAGS += `pkg-config --libs-only-L gumbo libcurl fuse uuid expat`
LIBS = -pthread -lgumbo -lcurl -lfuse -lcrypto -lexpat
COBJS = main.o network.o fuse_local.o link.o cache.o util.o sonic.o log.o\
config.o memcache.o
OS := $(shell uname)
ifeq ($(OS),Darwin)
BREW_PREFIX := $(shell brew --prefix)
CFLAGS += -I$(BREW_PREFIX)/opt/openssl/include \
-I$(BREW_PREFIX)/opt/curl/include
LDFLAGS += -L$(BREW_PREFIX)/opt/openssl/lib \
-L$(BREW_PREFIX)/opt/curl/lib
else
LIBS += -luuid
endif
ifeq ($(OS),FreeBSD)
LIBS += -lexecinfo
endif
prefix ?= /usr/local
all: httpdirfs
%.o: src/%.c
$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
httpdirfs: $(COBJS)
$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
install:
ifeq ($(OS),Linux)
install -m 755 -D httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
install -m 644 -D doc/man/httpdirfs.1 \
$(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
ifeq ($(OS),FreeBSD)
install -m 755 httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
gzip -f -k doc/man/httpdirfs.1
install -m 644 doc/man/httpdirfs.1.gz \
$(DESTDIR)$(prefix)/man/man1/httpdirfs.1.gz
endif
ifeq ($(OS),Darwin)
install -d $(DESTDIR)$(prefix)/bin
install -m 755 httpdirfs \
$(DESTDIR)$(prefix)/bin/httpdirfs
install -d $(DESTDIR)$(prefix)/share/man/man1
install -m 644 doc/man/httpdirfs.1 \
$(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
man: httpdirfs
mkdir -p doc/man
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
clean:
-rm -f src/*.h~
-rm -f src/*.c~
-rm -f src/*orig
-rm -f *.o
-rm -f httpdirfs
distclean: clean
-rm -rf doc/html
-rm -rf doc/man/httpdirfs.1
uninstall:
-rm -f $(DESTDIR)$(prefix)/bin/httpdirfs
ifeq ($(OS),Linux)
-rm -f $(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
ifeq ($(OS),FreeBSD)
-rm -f $(DESTDIR)$(prefix)/man/man1/httpdirfs.1.gz
endif
ifeq ($(OS),Darwin)
-rm -f $(DESTDIR)$(prefix)/share/man/man1/httpdirfs.1
endif
depend: .depend
.depend: src/*.c
rm -f ./.depend
$(CC) $(CFLAGS) -MM $^ -MF ./.depend;
include .depend
.PHONY: all man doc install clean distclean uninstall depend format

36
Makefile.am Normal file

@ -0,0 +1,36 @@
bin_PROGRAMS = httpdirfs
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c
# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\"\
$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
LIBS += $(pkgconf_LIBS) $(fuse_LIBS)
man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*
# %.o: $(srcdir)/src/%.c
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
man: doc/man/httpdirfs.1
doc/man/httpdirfs.1: httpdirfs
mkdir -p doc/man
rm -f doc/man/httpdirfs.1.tmp
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
.PHONY: man doc format

934
Makefile.in Normal file

@ -0,0 +1,934 @@
# Makefile.in generated by automake 1.16.5 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2021 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
bin_PROGRAMS = httpdirfs$(EXEEXT)
subdir = .
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \
$(am__configure_deps) $(am__DIST_COMMON)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno config.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES = Doxyfile
CONFIG_CLEAN_VPATH_FILES =
am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"
PROGRAMS = $(bin_PROGRAMS)
am__dirstamp = $(am__leading_dot)dirstamp
am_httpdirfs_OBJECTS = src/main.$(OBJEXT) src/network.$(OBJEXT) \
src/fuse_local.$(OBJEXT) src/link.$(OBJEXT) \
src/cache.$(OBJEXT) src/util.$(OBJEXT) src/sonic.$(OBJEXT) \
src/log.$(OBJEXT) src/config.$(OBJEXT) src/memcache.$(OBJEXT)
httpdirfs_OBJECTS = $(am_httpdirfs_OBJECTS)
httpdirfs_LDADD = $(LDADD)
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__maybe_remake_depfiles = depfiles
am__depfiles_remade = src/$(DEPDIR)/cache.Po src/$(DEPDIR)/config.Po \
src/$(DEPDIR)/fuse_local.Po src/$(DEPDIR)/link.Po \
src/$(DEPDIR)/log.Po src/$(DEPDIR)/main.Po \
src/$(DEPDIR)/memcache.Po src/$(DEPDIR)/network.Po \
src/$(DEPDIR)/sonic.Po src/$(DEPDIR)/util.Po
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(httpdirfs_SOURCES)
DIST_SOURCES = $(httpdirfs_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
man1dir = $(mandir)/man1
NROFF = nroff
MANS = $(man_MANS)
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
AM_RECURSIVE_TARGETS = cscope
am__DIST_COMMON = $(srcdir)/Doxyfile.in $(srcdir)/Makefile.in \
README.md compile config.guess config.sub depcomp install-sh \
missing
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
if test -d "$(distdir)"; then \
find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -rf "$(distdir)" \
|| { sleep 5 && rm -rf "$(distdir)"; }; \
else :; fi
am__post_remove_distdir = $(am__remove_distdir)
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
DIST_TARGETS = dist-gzip
# Exists only to be overridden by the user if desired.
AM_DISTCHECK_DVI_TARGET = dvi
distuninstallcheck_listfiles = find . -type f -print
am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
| sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
# This has $(fuse_LIBS) in it because there's a bug in the fuse pkgconf:
# it should add -pthread to CFLAGS but doesn't.
# $(NUCLA) is explained in configure.ac.
CFLAGS = -g -O2 -Wall -Wextra -Wshadow $(NUCLA) \
-rdynamic -D_GNU_SOURCE -DVERSION=\"$(VERSION)\"\
$(pkgconf_CFLAGS) $(fuse_CFLAGS) $(fuse_LIBS)
CPPFLAGS = @CPPFLAGS@
CSCOPE = @CSCOPE@
CTAGS = @CTAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
ETAGS = @ETAGS@
EXEEXT = @EXEEXT@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@ $(pkgconf_LIBS) $(fuse_LIBS)
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
NUCLA = @NUCLA@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
VERSION = @VERSION@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
fuse_CFLAGS = @fuse_CFLAGS@
fuse_LIBS = @fuse_LIBS@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkgconf_CFLAGS = @pkgconf_CFLAGS@
pkgconf_LIBS = @pkgconf_LIBS@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
httpdirfs_SOURCES = src/main.c src/network.c src/fuse_local.c src/link.c \
src/cache.c src/util.c src/sonic.c src/log.c src/config.c src/memcache.c
man_MANS = doc/man/httpdirfs.1
CLEANFILES = doc/man/*
DISTCLEANFILES = doc/html/*
all: all-am
.SUFFIXES:
.SUFFIXES: .c .o .obj
am--refresh: Makefile
@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
$(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --foreign Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(top_srcdir)/configure: $(am__configure_deps)
$(am__cd) $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
$(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
$(am__aclocal_m4_deps):
Doxyfile: $(top_builddir)/config.status $(srcdir)/Doxyfile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
install-binPROGRAMS: $(bin_PROGRAMS)
@$(NORMAL_INSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(bindir)'"; \
$(MKDIR_P) "$(DESTDIR)$(bindir)" || exit 1; \
fi; \
for p in $$list; do echo "$$p $$p"; done | \
sed 's/$(EXEEXT)$$//' | \
while read p p1; do if test -f $$p \
; then echo "$$p"; echo "$$p"; else :; fi; \
done | \
sed -e 'p;s,.*/,,;n;h' \
-e 's|.*|.|' \
-e 'p;x;s,.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/' | \
sed 'N;N;N;s,\n, ,g' | \
$(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1 } \
{ d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \
if ($$2 == $$4) files[d] = files[d] " " $$1; \
else { print "f", $$3 "/" $$4, $$1; } } \
END { for (d in files) print "f", d, files[d] }' | \
while read type dir files; do \
if test "$$dir" = .; then dir=; else dir=/$$dir; fi; \
test -z "$$files" || { \
echo " $(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files '$(DESTDIR)$(bindir)$$dir'"; \
$(INSTALL_PROGRAM_ENV) $(INSTALL_PROGRAM) $$files "$(DESTDIR)$(bindir)$$dir" || exit $$?; \
} \
; done
uninstall-binPROGRAMS:
@$(NORMAL_UNINSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
files=`for p in $$list; do echo "$$p"; done | \
sed -e 'h;s,^.*/,,;s/$(EXEEXT)$$//;$(transform)' \
-e 's/$$/$(EXEEXT)/' \
`; \
test -n "$$list" || exit 0; \
echo " ( cd '$(DESTDIR)$(bindir)' && rm -f" $$files ")"; \
cd "$(DESTDIR)$(bindir)" && rm -f $$files
clean-binPROGRAMS:
-test -z "$(bin_PROGRAMS)" || rm -f $(bin_PROGRAMS)
src/$(am__dirstamp):
@$(MKDIR_P) src
@: > src/$(am__dirstamp)
src/$(DEPDIR)/$(am__dirstamp):
@$(MKDIR_P) src/$(DEPDIR)
@: > src/$(DEPDIR)/$(am__dirstamp)
src/main.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/network.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/fuse_local.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/link.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/cache.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/util.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/sonic.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/log.$(OBJEXT): src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
src/config.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
src/memcache.$(OBJEXT): src/$(am__dirstamp) \
src/$(DEPDIR)/$(am__dirstamp)
httpdirfs$(EXEEXT): $(httpdirfs_OBJECTS) $(httpdirfs_DEPENDENCIES) $(EXTRA_httpdirfs_DEPENDENCIES)
@rm -f httpdirfs$(EXEEXT)
$(AM_V_CCLD)$(LINK) $(httpdirfs_OBJECTS) $(httpdirfs_LDADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
-rm -f src/*.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/cache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/config.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/fuse_local.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/link.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/log.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/main.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/memcache.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/network.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/sonic.Po@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/util.Po@am__quote@ # am--include-marker
$(am__depfiles_remade):
@$(MKDIR_P) $(@D)
@echo '# dummy' >$@-t && $(am__mv) $@-t $@
am--depfiles: $(am__depfiles_remade)
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\
@am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
install-man1: $(man_MANS)
@$(NORMAL_INSTALL)
@list1=''; \
list2='$(man_MANS)'; \
test -n "$(man1dir)" \
&& test -n "`echo $$list1$$list2`" \
|| exit 0; \
echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \
$(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \
{ for i in $$list1; do echo "$$i"; done; \
if test -n "$$list2"; then \
for i in $$list2; do echo "$$i"; done \
| sed -n '/\.1[a-z]*$$/p'; \
fi; \
} | while read p; do \
if test -f $$p; then d=; else d="$(srcdir)/"; fi; \
echo "$$d$$p"; echo "$$p"; \
done | \
sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \
sed 'N;N;s,\n, ,g' | { \
list=; while read file base inst; do \
if test "$$base" = "$$inst"; then list="$$list $$file"; else \
echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \
$(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \
fi; \
done; \
for i in $$list; do echo "$$i"; done | $(am__base_list) | \
while read files; do \
test -z "$$files" || { \
echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \
$(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \
done; }
uninstall-man1:
@$(NORMAL_UNINSTALL)
@list=''; test -n "$(man1dir)" || exit 0; \
files=`{ for i in $$list; do echo "$$i"; done; \
l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \
sed -n '/\.1[a-z]*$$/p'; \
} | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \
-e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \
dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir)
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscope: cscope.files
test ! -s cscope.files \
|| $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS)
clean-cscope:
-rm -f cscope.files
cscope.files: clean-cscope cscopelist
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
-rm -f cscope.out cscope.in.out cscope.po.out cscope.files
distdir: $(BUILT_SOURCES)
$(MAKE) $(AM_MAKEFLAGS) distdir-am
distdir-am: $(DISTFILES)
$(am__remove_distdir)
test -d "$(distdir)" || mkdir "$(distdir)"
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
-test -n "$(am__skip_mode_fix)" \
|| find "$(distdir)" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r "$(distdir)"
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz
$(am__post_remove_distdir)
dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
$(am__post_remove_distdir)
dist-lzip: distdir
tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
$(am__post_remove_distdir)
dist-xz: distdir
tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
$(am__post_remove_distdir)
dist-zstd: distdir
tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst
$(am__post_remove_distdir)
dist-tarZ: distdir
@echo WARNING: "Support for distribution archives compressed with" \
"legacy program 'compress' is deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__post_remove_distdir)
dist-shar: distdir
@echo WARNING: "Support for shar distribution archives is" \
"deprecated." >&2
@echo WARNING: "It will be removed altogether in Automake 2.0" >&2
shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz
$(am__post_remove_distdir)
dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__post_remove_distdir)
dist dist-all:
$(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:'
$(am__post_remove_distdir)
# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.lz*) \
lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
*.tar.xz*) \
xz -dc $(distdir).tar.xz | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
*.tar.zst*) \
zstd -dc $(distdir).tar.zst | $(am__untar) ;;\
esac
chmod -R a-w $(distdir)
chmod u+w $(distdir)
mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst
chmod a-w $(distdir)
test -d $(distdir)/_build || exit 0; \
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& am__cwd=`pwd` \
&& $(am__cd) $(distdir)/_build/sub \
&& ../../configure \
$(AM_DISTCHECK_CONFIGURE_FLAGS) \
$(DISTCHECK_CONFIGURE_FLAGS) \
--srcdir=../.. --prefix="$$dc_install_base" \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
&& cd "$$am__cwd" \
|| exit 1
$(am__post_remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
distuninstallcheck:
@test -n '$(distuninstallcheck_dir)' || { \
echo 'ERROR: trying to run $@ with an empty' \
'$$(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
$(am__cd) '$(distuninstallcheck_dir)' || { \
echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
exit 1; \
}; \
test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-am
all-am: Makefile $(PROGRAMS) $(MANS)
installdirs:
for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
-rm -f src/$(DEPDIR)/$(am__dirstamp)
-rm -f src/$(am__dirstamp)
-test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-binPROGRAMS clean-generic mostlyclean-am
distclean: distclean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-man
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am: install-binPROGRAMS
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man: install-man1
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f src/$(DEPDIR)/cache.Po
-rm -f src/$(DEPDIR)/config.Po
-rm -f src/$(DEPDIR)/fuse_local.Po
-rm -f src/$(DEPDIR)/link.Po
-rm -f src/$(DEPDIR)/log.Po
-rm -f src/$(DEPDIR)/main.Po
-rm -f src/$(DEPDIR)/memcache.Po
-rm -f src/$(DEPDIR)/network.Po
-rm -f src/$(DEPDIR)/sonic.Po
-rm -f src/$(DEPDIR)/util.Po
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-man
uninstall-man: uninstall-man1
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am am--depfiles am--refresh check \
check-am clean clean-binPROGRAMS clean-cscope clean-generic \
cscope cscopelist-am ctags ctags-am dist dist-all dist-bzip2 \
dist-gzip dist-lzip dist-shar dist-tarZ dist-xz dist-zip \
dist-zstd distcheck distclean distclean-compile \
distclean-generic distclean-tags distcleancheck distdir \
distuninstallcheck dvi dvi-am html html-am info info-am \
install install-am install-binPROGRAMS install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-man1 install-pdf \
install-pdf-am install-ps install-ps-am install-strip \
installcheck installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic pdf pdf-am ps ps-am tags tags-am uninstall \
uninstall-am uninstall-binPROGRAMS uninstall-man \
uninstall-man1
.PRECIOUS: Makefile
# %.o: $(srcdir)/src/%.c
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -c -o $@ $<
# httpdirfs: $(COBJS)
# $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $^ $(LIBS)
man: doc/man/httpdirfs.1
doc/man/httpdirfs.1: httpdirfs
mkdir -p doc/man
rm -f doc/man/httpdirfs.1.tmp
help2man --name "mount HTTP directory as a virtual filesystem" \
--no-discard-stderr ./httpdirfs > doc/man/httpdirfs.1.tmp
mv doc/man/httpdirfs.1.tmp doc/man/httpdirfs.1
doc:
doxygen Doxyfile
format:
astyle --style=kr --align-pointer=name --max-code-length=80 src/*.c src/*.h
.PHONY: man doc format
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

README.md

@ -1,3 +1,8 @@
[![CodeQL](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml/badge.svg)](https://github.com/fangfufu/httpdirfs/actions/workflows/codeql.yml)
[![CodeFactor](https://www.codefactor.io/repository/github/fangfufu/httpdirfs/badge)](https://www.codefactor.io/repository/github/fangfufu/httpdirfs)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/30af0a5b4d6f4a4d83ddb68f5193ad23)](https://app.codacy.com/gh/fangfufu/httpdirfs/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=fangfufu_httpdirfs&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=fangfufu_httpdirfs)
# HTTPDirFS - HTTP Directory Filesystem with a permanent cache, and Airsonic / Subsonic server support!
Have you ever wanted to mount those HTTP directory listings as if it was a
@ -22,9 +27,9 @@ present a HTTP directory listing.
## Installation
Please note that if you install HTTPDirFS from a repository, it can be outdated.
### Debian 11 "Bullseye"
HTTPDirFS is available as a package in Debian 11 "Bullseye", If you are on
Debian Bullseye, you can simply run the following
### Debian 12 "Bookworm"
HTTPDirFS is available as a package in Debian 12 "Bookworm". If you are on
Debian Bookworm, you can simply run the following
command as ``root``:
apt install httpdirfs
@ -42,49 +47,37 @@ HTTPDirFS is available in the
## Compilation
### Ubuntu
Under Ubuntu 18.04.4 LTS, you need the following packages:
Under Ubuntu 22.04 LTS, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
### Debian 11 "Bullseye" and Debian 10 "Buster"
Under Debian 10 "Buster" and newer versions, you need the following packages:
### Debian 12 "Bookworm"
Under Debian 12 "Bookworm" and newer versions, you need the following packages:
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev
### Debian 9 "Stretch"
Under Debian 9 "Stretch", you need the following packages:
libgumbo-dev libfuse-dev libssl1.0-dev libcurl4-openssl-dev
If you get the following warnings during compilation,
/usr/bin/ld: warning: libcrypto.so.1.0.2, needed by /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libcurl.so, may conflict with libcrypto.so.1.1
then this program will crash if you connect to HTTPS website. You need to check
if you have ``libssl1.0-dev`` installed rather than ``libssl-dev``.
This is because you likely have the binaries of OpenSSL 1.0.2 installed alongside
the header files for OpenSSL 1.1. The header files for OpenSSL 1.0.2 link in
additional mutex related callback functions, whereas the header files for
OpenSSL 1.1 do not.
You can check your SSL engine version using the ``--version`` flag.
libgumbo-dev libfuse-dev libssl-dev libcurl4-openssl-dev uuid-dev help2man
libexpat1-dev pkg-config autoconf
### FreeBSD
The following dependencies are required from either pkg or ports:
Packages:
gmake fusefs-libs gumbo e2fsprogs-libuuid curl expat
gmake fusefs-libs gumbo e2fsprogs-libuuid curl expat pkgconf help2man
If you want to be able to build the documentation ("gmake doc") you also need
doxygen (devel/doxygen).
Ports:
devel/gmake sysutils/fusefs-libs devel/gumbo misc/e2fsprogs-libuuid ftp/curl textproc/expat2
devel/gmake sysutils/fusefs-libs devel/gumbo misc/e2fsprogs-libuuid ftp/curl textproc/expat2 devel/pkgconf devel/doxygen misc/help2man
**Note:** If you want brotli compression support, you will need to install curl
from ports and enable the option.
You can then build + install with:
./configure
gmake
sudo gmake install
@ -92,12 +85,16 @@ Alternatively, you may use the FreeBSD [ports(7)](https://man.freebsd.org/ports/
infrastructure to build HTTPDirFS from source with the modifications you need.
### macOS
You need to install macFUSE, cURL, gumbo, and OpenSSL from Homebrew:
You need to install some packages from Homebrew:
brew install macfuse curl gumbo-parser openssl pkg-config
brew install macfuse curl gumbo-parser openssl pkg-config help2man
If you want to be able to build the documentation ("make doc") you also need
help2man, doxygen, and graphviz.
Build and install:
./configure
make
sudo make install
@ -139,7 +136,7 @@ HTTPDirFS options:
--retry-wait Set delay in seconds before retrying an HTTP request
after encountering an error. (default: 5)
--user-agent Set user agent string (default: "HTTPDirFS")
--no-range-check Disable the build-in check for the server's support
--no-range-check Disable the built-in check for the server's support
for HTTP range requests
--insecure-tls Disable libcurl TLS certificate verification by
setting CURLOPT_SSL_VERIFYHOST to 0
@ -268,7 +265,7 @@ Alternatively, you can specify your own configuration file by using the
### Log levels
You can control how much log HTTPDirFS outputs by setting the
``HTTPDIRFS_LOG_LEVEL`` enviromental variable. For details of the different
``HTTPDIRFS_LOG_LEVEL`` environmental variable. For details of the different
types of log that are supported, please refer to
[log.h](https://github.com/fangfufu/httpdirfs/blob/master/src/log.h) and
[log.c](https://github.com/fangfufu/httpdirfs/blob/master/src/log.c).
@ -310,6 +307,8 @@ for the technical and moral support. Your wisdom is much appreciated!
compatibility patches.
- I would like to thank [hiliev](https://github.com/hiliev) for providing macOS
compatibility patches.
- I would like to thank [Jonathan Kamens](https://github.com/jikamens) for providing
a whole bunch of code improvements and the improved build system.
- I would like to thank [-Archivist](https://www.reddit.com/user/-Archivist/)
for not providing FTP or WebDAV access to his server. This piece of software was
written in direct response to his appalling behaviour.

1548
aclocal.m4 vendored Normal file

File diff suppressed because it is too large.

343
compile Executable file

@ -0,0 +1,343 @@
#! /bin/sh
# Wrapper for compilers which do not understand '-c -o'.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# Written by Tom Tromey <tromey@cygnus.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
nl='
'
# We need space, tab and new line, in precisely that order. Quoting is
# there to prevent tools from complaining about whitespace usage.
IFS=" "" $nl"
file_conv=
# func_file_conv build_file lazy
# Convert a $build file to $host form and store it in $file
# Currently only supports Windows hosts. If the determined conversion
# type is listed in (the comma separated) LAZY, no conversion will
# take place.
func_file_conv ()
{
file=$1
case $file in
/ | /[!/]*) # absolute file, and not a UNC file
if test -z "$file_conv"; then
# lazily determine how to convert abs files
case `uname -s` in
MINGW*)
file_conv=mingw
;;
CYGWIN* | MSYS*)
file_conv=cygwin
;;
*)
file_conv=wine
;;
esac
fi
case $file_conv/,$2, in
*,$file_conv,*)
;;
mingw/*)
file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
;;
cygwin/* | msys/*)
file=`cygpath -m "$file" || echo "$file"`
;;
wine/*)
file=`winepath -w "$file" || echo "$file"`
;;
esac
;;
esac
}
# func_cl_dashL linkdir
# Make cl look for libraries in LINKDIR
func_cl_dashL ()
{
func_file_conv "$1"
if test -z "$lib_path"; then
lib_path=$file
else
lib_path="$lib_path;$file"
fi
linker_opts="$linker_opts -LIBPATH:$file"
}
# func_cl_dashl library
# Do a library search-path lookup for cl
func_cl_dashl ()
{
lib=$1
found=no
save_IFS=$IFS
IFS=';'
for dir in $lib_path $LIB
do
IFS=$save_IFS
if $shared && test -f "$dir/$lib.dll.lib"; then
found=yes
lib=$dir/$lib.dll.lib
break
fi
if test -f "$dir/$lib.lib"; then
found=yes
lib=$dir/$lib.lib
break
fi
if test -f "$dir/lib$lib.a"; then
found=yes
lib=$dir/lib$lib.a
break
fi
done
IFS=$save_IFS
if test "$found" != yes; then
lib=$lib.lib
fi
}
# func_cl_wrapper cl arg...
# Adjust compile command to suit cl
func_cl_wrapper ()
{
# Assume a capable shell
lib_path=
shared=:
linker_opts=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
eat=1
case $2 in
*.o | *.[oO][bB][jJ])
func_file_conv "$2"
set x "$@" -Fo"$file"
shift
;;
*)
func_file_conv "$2"
set x "$@" -Fe"$file"
shift
;;
esac
;;
-I)
eat=1
func_file_conv "$2" mingw
set x "$@" -I"$file"
shift
;;
-I*)
func_file_conv "${1#-I}" mingw
set x "$@" -I"$file"
shift
;;
-l)
eat=1
func_cl_dashl "$2"
set x "$@" "$lib"
shift
;;
-l*)
func_cl_dashl "${1#-l}"
set x "$@" "$lib"
shift
;;
-L)
eat=1
func_cl_dashL "$2"
;;
-L*)
func_cl_dashL "${1#-L}"
;;
-static)
shared=false
;;
-Wl,*)
arg=${1#-Wl,}
save_ifs="$IFS"; IFS=','
for flag in $arg; do
IFS="$save_ifs"
linker_opts="$linker_opts $flag"
done
IFS="$save_ifs"
;;
-Xlinker)
eat=1
linker_opts="$linker_opts $2"
;;
-*)
set x "$@" "$1"
shift
;;
*.cc | *.CC | *.cxx | *.CXX | *.[cC]++)
func_file_conv "$1"
set x "$@" -Tp"$file"
shift
;;
*.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO])
func_file_conv "$1" mingw
set x "$@" "$file"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -n "$linker_opts"; then
linker_opts="-link$linker_opts"
fi
exec "$@" $linker_opts
exit 1
}
eat=
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand '-c -o'.
Remove '-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file 'INSTALL'.
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "compile $scriptversion"
exit $?
;;
cl | *[/\\]cl | cl.exe | *[/\\]cl.exe | \
icl | *[/\\]icl | icl.exe | *[/\\]icl.exe )
func_cl_wrapper "$@" # Doesn't return...
;;
esac
ofile=
cfile=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
# So we strip '-o arg' only if arg is an object.
eat=1
case $2 in
*.o | *.obj)
ofile=$2
;;
*)
set x "$@" -o "$2"
shift
;;
esac
;;
*.c)
cfile=$1
set x "$@" "$1"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no '-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# '.c' file was seen then we are probably linking. That is also
# ok.
exec "$@"
fi
# Name of file we expect compiler to create.
cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'`
# Create the lock directory.
# Note: use '[/\\:.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d
while true; do
if mkdir "$lockdir" >/dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir '$lockdir'; exit 1" 1 2 15
# Run the compile.
"$@"
ret=$?
if test -f "$cofile"; then
test "$cofile" = "$ofile" || mv "$cofile" "$ofile"
elif test -f "${cofile}bj"; then
test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile"
fi
rmdir "$lockdir"
exit $ret
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:

config.guess vendored Executable file

File diff suppressed because it is too large

config.sub vendored Executable file

File diff suppressed because it is too large

configure vendored Executable file

File diff suppressed because it is too large

configure.ac Normal file

@ -0,0 +1,14 @@
AC_INIT([httpdirfs],[1.2.5])
AC_CANONICAL_BUILD
AC_CONFIG_FILES([Makefile Doxyfile])
AC_PROG_CC
AC_SEARCH_LIBS([backtrace],[execinfo])
# Because we use $(fuse_LIBS) in $(CFLAGS); see comment in Makefile.in
AX_CHECK_COMPILE_FLAG([-Wunused-command-line-argument],[NUCLA=-Wno-unused-command-line-argument],[-Werror])
AC_SUBST([NUCLA])
AM_INIT_AUTOMAKE([foreign subdir-objects])
PKG_CHECK_MODULES([pkgconf],[gumbo libcurl uuid expat openssl])
# This is separate because we need to be able to use $(fuse_LIBS) in CFLAGS
PKG_CHECK_MODULES([fuse],[fuse])
AC_OUTPUT
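For anyone editing this file, a typical regeneration sequence is sketched below; it assumes autoconf, automake, and the autoconf-archive macros (needed for AX_CHECK_COMPILE_FLAG) are installed, and it is standard autotools practice rather than anything project-specific:
autoreconf --install
./configure
make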

depcomp Executable file

@ -0,0 +1,786 @@
#! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1999-2021 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAMS ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by 'PROGRAMS ARGS'.
object Object file output by 'PROGRAMS ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
# Get the directory component of the given path, and save it in the
# global variables '$dir'. Note that this directory component will
# be either empty or ending with a '/' character. This is deliberate.
set_dir_from ()
{
case $1 in
*/*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
*) dir=;;
esac
}
# Get the suffix-stripped basename of the given path, and save it the
# global variable '$base'.
set_base_from ()
{
base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
}
# If no dependency file was actually created by the compiler invocation,
# we still have to create a dummy depfile, to avoid errors with the
# Makefile "include basename.Plo" scheme.
make_dummy_depfile ()
{
echo "#dummy" > "$depfile"
}
# Factor out some common post-processing of the generated depfile.
# Requires the auxiliary global variable '$tmpdepfile' to be set.
aix_post_process_depfile ()
{
# If the compiler actually managed to produce a dependency file,
# post-process it.
if test -f "$tmpdepfile"; then
# Each line is of the form 'foo.o: dependency.h'.
# Do two passes, one to just change these to
# $object: dependency.h
# and one to simply output
# dependency.h:
# which is needed to avoid the deleted-header problem.
{ sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
} > "$depfile"
rm -f "$tmpdepfile"
else
make_dummy_depfile
fi
}
# A tabulation character.
tab=' '
# A newline character.
nl='
'
# Character ranges might be problematic outside the C locale.
# These definitions help.
upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lower=abcdefghijklmnopqrstuvwxyz
digits=0123456789
alpha=${upper}${lower}
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
rm -f "$tmpdepfile"
# Avoid interferences from the environment.
gccflag= dashmflag=
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
cygpath_u="cygpath -u -f -"
if test "$depmode" = msvcmsys; then
# This is just like msvisualcpp but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvisualcpp
fi
if test "$depmode" = msvc7msys; then
# This is just like msvc7 but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvc7
fi
if test "$depmode" = xlc; then
# IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
gccflag=-qmakedep=gcc,-MF
depmode=gcc
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## Note that this doesn't just cater to obsolete pre-3.x GCC compilers,
## but also to in-use compilers like IBM xlc/xlC and the HP C compiler.
## (see the conditional assignment to $gccflag above).
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say). Also, it might not be
## supported by the other compilers which use the 'gcc' depmode.
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The second -e expression handles DOS-style file names with drive
# letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the "deleted header file" problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
## Some versions of gcc put a space before the ':'. On the theory
## that the space means something, we add a space to the output as
## well. hp depmode also adds that space, but also prefixes the VPATH
## to the object. Take care to not repeat it in the output.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like '#:fec' to the end of the
# dependency line.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
| tr "$nl" ' ' >> "$depfile"
echo >> "$depfile"
# The second pass generates a dummy entry for each header file.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile"
;;
xlc)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts '$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.u
tmpdepfile2=$base.u
tmpdepfile3=$dir.libs/$base.u
"$@" -Wc,-M
else
tmpdepfile1=$dir$base.u
tmpdepfile2=$dir$base.u
tmpdepfile3=$dir$base.u
"$@" -M
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
aix_post_process_depfile
;;
tcc)
# tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26
# FIXME: That version is still under development at the moment of writing.
# Make sure that this statement remains true also for stable, released
# versions.
# It will wrap lines (doesn't matter whether long or short) with a
# trailing '\', as in:
#
# foo.o : \
# foo.c \
# foo.h \
#
# It will put a trailing '\' even on the last line, and will use leading
# spaces rather than leading tabs (at least since its commit 0394caf7
# "Emit spaces for -MD").
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
# We have to change lines of the first kind to '$object: \'.
sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
# And for each line of the second kind, we have to emit a 'dep.h:'
# dummy dependency, to avoid the deleted-header problem.
sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
rm -f "$tmpdepfile"
;;
## The order of this option in the case statement is important, since the
## shell code in configure will try each of these formats in the order
## listed in this file. A plain '-MD' option would be understood by many
## compilers, so we must ensure this comes after the gcc and icc options.
pgcc)
# Portland's C compiler understands '-MD'.
# Will always output deps to 'file.d' where file is the root name of the
# source file under compilation, even if file resides in a subdirectory.
# The object file name does not affect the name of the '.d' file.
# pgcc 10.2 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using '\' :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
set_dir_from "$object"
# Use the source, not the object, to determine the base name, since
# that's sadly what pgcc will do too.
set_base_from "$source"
tmpdepfile=$base.d
# For projects that build the same source file twice into different object
# files, the pgcc approach of using the *source* file root name can cause
# problems in parallel builds. Use a locking strategy to avoid stomping on
# the same $tmpdepfile.
lockdir=$base.d-lock
trap "
echo '$0: caught signal, cleaning up...' >&2
rmdir '$lockdir'
exit 1
" 1 2 13 15
numtries=100
i=$numtries
while test $i -gt 0; do
# mkdir is a portable test-and-set.
if mkdir "$lockdir" 2>/dev/null; then
# This process acquired the lock.
"$@" -MD
stat=$?
# Release the lock.
rmdir "$lockdir"
break
else
# If the lock is being held by a different process, wait
# until the winning process is done or we timeout.
while test -d "$lockdir" && test $i -gt 0; do
sleep 1
i=`expr $i - 1`
done
fi
i=`expr $i - 1`
done
trap - 1 2 13 15
if test $i -le 0; then
echo "$0: failed to acquire lock after $numtries attempts" >&2
echo "$0: check lockdir '$lockdir'" >&2
exit 1
fi
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
# Add 'dependent.h:' lines.
sed -ne '2,${
s/^ *//
s/ \\*$//
s/$/:/
p
}' "$tmpdepfile" >> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in 'foo.d' instead, so we check for that too.
# Subdirectories are respected.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
# Libtool generates 2 separate objects for the 2 libraries. These
# two compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir$base.o.d # libtool 1.5
tmpdepfile2=$dir.libs/$base.o.d # Likewise.
tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
# Same post-processing that is required for AIX mode.
aix_post_process_depfile
;;
msvc7)
if test "$libtool" = yes; then
showIncludes=-Wc,-showIncludes
else
showIncludes=-showIncludes
fi
"$@" $showIncludes > "$tmpdepfile"
stat=$?
grep -v '^Note: including file: ' "$tmpdepfile"
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The first sed program below extracts the file names and escapes
# backslashes for cygpath. The second sed program outputs the file
# name when reading, but also accumulates all include files in the
# hold buffer in order to output them again at the end. This only
# works with sed implementations that can handle large buffers.
sed < "$tmpdepfile" -n '
/^Note: including file: *\(.*\)/ {
s//\1/
s/\\/\\\\/g
p
}' | $cygpath_u | sort -u | sed -n '
s/ /\\ /g
s/\(.*\)/'"$tab"'\1 \\/p
s/.\(.*\) \\/\1:/
H
$ {
s/.*/'"$tab"'/
G
p
}' >> "$depfile"
echo >> "$depfile" # make sure the fragment doesn't end with a backslash
rm -f "$tmpdepfile"
;;
msvc7msys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for ':'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
"$@" $dashmflag |
sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this sed invocation
# correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no eat=no
for arg
do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
if test $eat = yes; then
eat=no
continue
fi
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-arch)
eat=yes ;;
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix=`echo "$object" | sed 's/^.*\././'`
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
# makedepend may prepend the VPATH from the source file name to the object.
# No need to regex-escape $object, excess matching of '.' is harmless.
sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process the last invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed '1,2d' "$tmpdepfile" \
| tr ' ' "$nl" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E \
| sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
| sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
IFS=" "
for arg
do
case "$arg" in
-o)
shift
;;
$object)
shift
;;
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E 2>/dev/null |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
echo "$tab" >> "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvcmsys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:

install-sh Executable file

@ -0,0 +1,533 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2020-11-14.01; # UTC
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# 'make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
tab=' '
nl='
'
IFS=" $tab$nl"
# Set DOITPROG to "echo" to test this script.
doit=${DOITPROG-}
doit_exec=${doit:-exec}
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
chgrpprog=${CHGRPPROG-chgrp}
chmodprog=${CHMODPROG-chmod}
chownprog=${CHOWNPROG-chown}
cmpprog=${CMPPROG-cmp}
cpprog=${CPPROG-cp}
mkdirprog=${MKDIRPROG-mkdir}
mvprog=${MVPROG-mv}
rmprog=${RMPROG-rm}
stripprog=${STRIPPROG-strip}
posix_mkdir=
# Desired mode of installed file.
mode=0755
# Create dirs (including intermediate dirs) using mode 755.
# This is like GNU 'install' as of coreutils 8.32 (2020).
mkdir_umask=22
backupsuffix=
chgrpcmd=
chmodcmd=$chmodprog
chowncmd=
mvcmd=$mvprog
rmcmd="$rmprog -f"
stripcmd=
src=
dst=
dir_arg=
dst_arg=
copy_on_change=false
is_target_a_directory=possibly
usage="\
Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
--help display this help and exit.
--version display version info and exit.
-c (ignored)
-C install only if different (preserve data modification time)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-p pass -p to $cpprog.
-s $stripprog installed files.
-S SUFFIX attempt to back up existing files, with suffix SUFFIX.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG
RMPROG STRIPPROG
By default, rm is invoked with -f; when overridden with RMPROG,
it's up to you to specify -f if you want it.
If -S is not specified, no backups are attempted.
Email bug reports to bug-automake@gnu.org.
Automake home page: https://www.gnu.org/software/automake/
"
while test $# -ne 0; do
case $1 in
-c) ;;
-C) copy_on_change=true;;
-d) dir_arg=true;;
-g) chgrpcmd="$chgrpprog $2"
shift;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
case $mode in
*' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
shift;;
-o) chowncmd="$chownprog $2"
shift;;
-p) cpprog="$cpprog -p";;
-s) stripcmd=$stripprog;;
-S) backupsuffix="$2"
shift;;
-t)
is_target_a_directory=always
dst_arg=$2
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
shift;;
-T) is_target_a_directory=never;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
shift
done
# We allow the use of options -d and -T together, by making -d
# take the precedence; this is for compatibility with GNU install.
if test -n "$dir_arg"; then
if test -n "$dst_arg"; then
echo "$0: target directory not allowed when installing a directory." >&2
exit 1
fi
fi
if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dst_arg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dst_arg"
shift # fnord
fi
shift # arg
dst_arg=$arg
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call 'install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
if test $# -gt 1 || test "$is_target_a_directory" = always; then
if test ! -d "$dst_arg"; then
echo "$0: $dst_arg: Is not a directory." >&2
exit 1
fi
fi
fi
if test -z "$dir_arg"; then
do_exit='(exit $ret); exit $ret'
trap "ret=129; $do_exit" 1
trap "ret=130; $do_exit" 2
trap "ret=141; $do_exit" 13
trap "ret=143; $do_exit" 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names problematic for 'test' and other utilities.
case $src in
-* | [=\(\)!]) src=./$src;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
# Don't chown directories that already exist.
if test $dstdir_status = 0; then
chowncmd=""
fi
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dst_arg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dst_arg
# If destination is a directory, append the input filename.
if test -d "$dst"; then
if test "$is_target_a_directory" = never; then
echo "$0: $dst_arg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dstbase=`basename "$src"`
case $dst in
*/) dst=$dst$dstbase;;
*) dst=$dst/$dstbase;;
esac
dstdir_status=0
else
dstdir=`dirname "$dst"`
test -d "$dstdir"
dstdir_status=$?
fi
fi
case $dstdir in
*/) dstdirslash=$dstdir;;
*) dstdirslash=$dstdir/;;
esac
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
# The $RANDOM variable is not portable (e.g., dash). Use it
# here however when possible just to lower collision chance.
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap '
ret=$?
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null
exit $ret
' 0
# Because "mkdir -p" follows existing symlinks and we likely work
# directly in world-writeable /tmp, make sure that the '$tmpdir'
# directory is successfully created first before we actually test
# 'mkdir -p'.
if (umask $mkdir_umask &&
$mkdirprog $mkdir_mode "$tmpdir" &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
test_tmpdir="$tmpdir/a"
ls_ld_tmpdir=`ls -ld "$test_tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$test_tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null
fi
trap '' 0;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix='/';;
[-=\(\)!]*) prefix='./';;
*) prefix='';;
esac
oIFS=$IFS
IFS=/
set -f
set fnord $dstdir
shift
set +f
IFS=$oIFS
prefixes=
for d
do
test X"$d" = X && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=${dstdirslash}_inst.$$_
rmtmp=${dstdirslash}_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask &&
{ test -z "$stripcmd" || {
# Create $dsttmp read-write so that cp doesn't create it read-only,
# which would cause strip to fail.
if test -z "$doit"; then
: >"$dsttmp" # No need to fork-exec 'touch'.
else
$doit touch "$dsttmp"
fi
}
} &&
$doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } &&
{ test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } &&
{ test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# If -C, don't bother to copy if it wouldn't change the file.
if $copy_on_change &&
old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` &&
new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` &&
set -f &&
set X $old && old=:$2:$4:$5:$6 &&
set X $new && new=:$2:$4:$5:$6 &&
set +f &&
test "$old" = "$new" &&
$cmpprog "$dst" "$dsttmp" >/dev/null 2>&1
then
rm -f "$dsttmp"
else
# If $backupsuffix is set, and the file being installed
# already exists, attempt a backup. Don't worry if it fails,
# e.g., if mv doesn't support -f.
if test -n "$backupsuffix" && test -f "$dst"; then
$doit $mvcmd -f "$dst" "$dst$backupsuffix" 2>/dev/null
fi
# Rename the file to the real destination.
$doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null ||
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
{
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
test ! -f "$dst" ||
$doit $rmcmd "$dst" 2>/dev/null ||
{ $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null &&
{ $doit $rmcmd "$rmtmp" 2>/dev/null; :; }
} ||
{ echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
fi || exit 1
trap '' 0
fi
done

missing Executable file

@ -0,0 +1,207 @@
#! /bin/sh
# Common wrapper for a few potentially missing GNU programs.
scriptversion=2018-03-07.03; # UTC
# Copyright (C) 1996-2021 Free Software Foundation, Inc.
# Originally written by Fran,cois Pinard <pinard@iro.umontreal.ca>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try '$0 --help' for more information"
exit 1
fi
case $1 in
--is-lightweight)
# Used by our autoconf macros to check whether the available missing
# script is modern enough.
exit 0
;;
--run)
# Back-compat with the calling convention used by older automake.
shift
;;
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this fails due
to PROGRAM being missing or too old.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
Supported PROGRAM values:
aclocal autoconf autoheader autom4te automake makeinfo
bison yacc flex lex help2man
Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and
'g' are ignored when checking the name.
Send bug reports to <bug-automake@gnu.org>."
exit $?
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
exit $?
;;
-*)
echo 1>&2 "$0: unknown '$1' option"
echo 1>&2 "Try '$0 --help' for more information"
exit 1
;;
esac
# Run the given program, remember its exit status.
"$@"; st=$?
# If it succeeded, we are done.
test $st -eq 0 && exit 0
# Also exit now if it failed (or wasn't found), and '--version' was
# passed; such an option is passed most likely to detect whether the
# program is present and works.
case $2 in --version|--help) exit $st;; esac
# Exit code 63 means version mismatch. This often happens when the user
# tries to use an ancient version of a tool on a file that requires a
# minimum version.
if test $st -eq 63; then
msg="probably too old"
elif test $st -eq 127; then
# Program was missing.
msg="missing on your system"
else
# Program was found and executed, but failed. Give up.
exit $st
fi
perl_URL=https://www.perl.org/
flex_URL=https://github.com/westes/flex
gnu_software_URL=https://www.gnu.org/software
program_details ()
{
case $1 in
aclocal|automake)
echo "The '$1' program is part of the GNU Automake package:"
echo "<$gnu_software_URL/automake>"
echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/autoconf>"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
autoconf|autom4te|autoheader)
echo "The '$1' program is part of the GNU Autoconf package:"
echo "<$gnu_software_URL/autoconf/>"
echo "It also requires GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
esac
}
give_advice ()
{
# Normalize program name to check for.
normalized_program=`echo "$1" | sed '
s/^gnu-//; t
s/^gnu//; t
s/^g//; t'`
printf '%s\n' "'$1' is $msg."
configure_deps="'configure.ac' or m4 files included by 'configure.ac'"
case $normalized_program in
autoconf*)
echo "You should only need it if you modified 'configure.ac',"
echo "or m4 files included by it."
program_details 'autoconf'
;;
autoheader*)
echo "You should only need it if you modified 'acconfig.h' or"
echo "$configure_deps."
program_details 'autoheader'
;;
automake*)
echo "You should only need it if you modified 'Makefile.am' or"
echo "$configure_deps."
program_details 'automake'
;;
aclocal*)
echo "You should only need it if you modified 'acinclude.m4' or"
echo "$configure_deps."
program_details 'aclocal'
;;
autom4te*)
echo "You might have modified some maintainer files that require"
echo "the 'autom4te' program to be rebuilt."
program_details 'autom4te'
;;
bison*|yacc*)
echo "You should only need it if you modified a '.y' file."
echo "You may want to install the GNU Bison package:"
echo "<$gnu_software_URL/bison/>"
;;
lex*|flex*)
echo "You should only need it if you modified a '.l' file."
echo "You may want to install the Fast Lexical Analyzer package:"
echo "<$flex_URL>"
;;
help2man*)
echo "You should only need it if you modified a dependency" \
"of a man page."
echo "You may want to install the GNU Help2man package:"
echo "<$gnu_software_URL/help2man/>"
;;
makeinfo*)
echo "You should only need it if you modified a '.texi' file, or"
echo "any other file indirectly affecting the aspect of the manual."
echo "You might want to install the Texinfo package:"
echo "<$gnu_software_URL/texinfo/>"
echo "The spurious makeinfo call might also be the consequence of"
echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might"
echo "want to install GNU make:"
echo "<$gnu_software_URL/make/>"
;;
*)
echo "You might have modified some files without having the proper"
echo "tools for further handling them. Check the 'README' file, it"
echo "often tells you about the needed prerequisites for installing"
echo "this package. You may also peek at any GNU archive site, in"
echo "case some other package contains this missing '$1' program."
;;
esac
}
give_advice "$1" | sed -e '1s/^/WARNING: /' \
-e '2,$s/^/ /' >&2
# Propagate the correct exit status (expected to be 127 for a program
# not found, 63 for a program that failed due to version mismatch).
exit $st

View File

@ -163,16 +163,12 @@ static int Meta_read(Cache *cf)
return EIO;
}
fread(&cf->time, sizeof(long), 1, fp);
fread(&cf->content_length, sizeof(off_t), 1, fp);
fread(&cf->blksz, sizeof(int), 1, fp);
fread(&cf->segbc, sizeof(long), 1, fp);
/*
* Error checking for fread
*/
if (ferror(fp)) {
lprintf(error, "error reading core metadata!\n");
if ( 1 != fread(&cf->time, sizeof(long), 1, fp) ||
1 != fread(&cf->content_length, sizeof(off_t), 1, fp) ||
1 != fread(&cf->blksz, sizeof(int), 1, fp) ||
1 != fread(&cf->segbc, sizeof(long), 1, fp) ||
ferror(fp) ) {
lprintf(error, "error reading core metadata %s!\n", cf->path);
return EIO;
}
@ -545,7 +541,9 @@ static void Cache_free(Cache *cf)
static int Cache_exist(const char *fn)
{
char *metafn = path_append(META_DIR, fn);
lprintf(debug, "metafn: %s\n", metafn);
char *datafn = path_append(DATA_DIR, fn);
lprintf(debug, "datafn: %s\n", datafn);
/*
* access() returns 0 on success
*/
@ -553,15 +551,17 @@ static int Cache_exist(const char *fn)
int no_data = access(datafn, F_OK);
if (no_meta ^ no_data) {
lprintf(warning, "Cache file partially missing.\n");
if (no_meta) {
lprintf(warning, "Cache file partially missing.\n");
lprintf(debug, "Unlinking datafn: %s\n", datafn);
if (unlink(datafn)) {
lprintf(error, "unlink(): %s\n", strerror(errno));
lprintf(fatal, "unlink(): %s\n", strerror(errno));
}
}
if (no_data) {
lprintf(debug, "Unlinking metafn: %s\n", metafn);
if (unlink(metafn)) {
lprintf(error, "unlink(): %s\n", strerror(errno));
lprintf(fatal, "unlink(): %s\n", strerror(errno));
}
}
}
@ -670,6 +670,7 @@ int Cache_create(const char *path)
Link *this_link = path_to_Link(path);
char *fn = "__UNINITIALISED__";
if (CONFIG.mode == NORMAL) {
fn = curl_easy_unescape(NULL,
this_link->f_url + ROOT_LINK_OFFSET, 0,
@ -714,8 +715,14 @@ int Cache_create(const char *path)
int res = Cache_exist(fn);
if (res) {
lprintf(fatal, "Cache file creation failed for %s\n", path);
}
if (CONFIG.mode == NORMAL) {
curl_free(fn);
} else if (CONFIG.mode == SONIC) {
curl_free(fn);
}
return res;
@ -949,7 +956,7 @@ static void *Cache_bgdl(void *arg)
cf->next_dl_offset);
if (recv < 0) {
lprintf(error, "thread %x received %ld bytes, \
which does't make sense\n", pthread_self(), recv);
which doesn't make sense\n", pthread_self(), recv);
}
if ((recv == cf->blksz) ||
@ -1030,7 +1037,7 @@ Cache_read(Cache *cf, char *const output_buf, const off_t len,
dl_offset);
if (recv < 0) {
lprintf(error, "thread %x received %ld bytes, \
which does't make sense\n", pthread_self(), recv);
which doesn't make sense\n", pthread_self(), recv);
}
/*
* check if we have received enough data, write it to the disk

View File

@ -53,6 +53,8 @@ void Config_init(void)
CONFIG.insecure_tls = 0;
CONFIG.refresh_timeout = DEFAULT_REFRESH_TIMEOUT;
/*--------------- Cache related ---------------*/
CONFIG.cache_enabled = 0;

View File

@ -23,6 +23,11 @@
*/
#define DEFAULT_NETWORK_MAX_CONNS 10
/**
* \brief The default refresh_timeout
*/
#define DEFAULT_REFRESH_TIMEOUT 3600
/**
* \brief Operation modes
*/
@ -67,6 +72,8 @@ typedef struct {
int insecure_tls;
/** \brief Server certificate file */
char *cafile;
/** \brief Refresh directory listing after refresh_timeout seconds*/
int refresh_timeout;
/*--------------- Cache related ---------------*/
/** \brief Whether cache mode is enabled */
int cache_enabled;

View File

@ -44,7 +44,7 @@ static int fs_getattr(const char *path, struct stat *stbuf)
if (!link) {
return -ENOENT;
}
struct timespec spec;
struct timespec spec = { 0 };
spec.tv_sec = link->time;
#if defined(__APPLE__) && defined(__MACH__)
stbuf->st_mtimespec = spec;
@ -95,23 +95,29 @@ static int fs_open(const char *path, struct fuse_file_info *fi)
if (!link) {
return -ENOENT;
}
lprintf(debug, "%s found.\n", path);
if ((fi->flags & O_RDWR) != O_RDONLY) {
return -EROFS;
}
if (CACHE_SYSTEM_INIT) {
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
if (!fi->fh) {
/*
* The link clearly exists, the cache cannot be opened, attempt
* cache creation
*/
lprintf(debug, "Cache_delete(%s);\n", path);
Cache_delete(path);
lprintf(debug, "Cache_create(%s);\n", path);
Cache_create(path);
lprintf(debug, "Cache_open(%s);\n", path);
fi->fh = (uint64_t) Cache_open(path);
/*
* The cache definitely cannot be opened for some reason.
*/
if (!fi->fh) {
lprintf(fatal, "Cache file creation failure for %s.\n", path);
return -ENOENT;
}
}
@ -138,15 +144,18 @@ fs_readdir(const char *path, void *buf, fuse_fill_dir_t dir_add,
(void) fi;
LinkTable *linktbl;
if (!strcmp(path, "/")) {
linktbl = ROOT_LINK_TBL;
} else {
linktbl = path_to_Link_LinkTable_new(path);
if (!linktbl) {
return -ENOENT;
}
#ifdef DEBUG
static int j = 0;
lprintf(debug, "!!!!Calling fs_readdir for the %d time!!!!\n", j);
j++;
#endif
linktbl = path_to_Link_LinkTable_new(path);
if (!linktbl) {
return -ENOENT;
}
/*
* start adding the links
*/

View File

@ -27,6 +27,7 @@ int ROOT_LINK_OFFSET = 0;
* effectively gives LinkTable generation priority over file transfer.
*/
static pthread_mutex_t link_lock;
static void make_link_relative(const char *page_url, char *link_url);
/**
* \brief create a new Link
@ -36,6 +37,7 @@ static Link *Link_new(const char *linkname, LinkType type)
Link *link = CALLOC(1, sizeof(Link));
strncpy(link->linkname, linkname, MAX_FILENAME_LEN);
strncpy(link->linkpath, linkname, MAX_FILENAME_LEN);
link->type = type;
/*
@ -51,6 +53,7 @@ static Link *Link_new(const char *linkname, LinkType type)
static CURL *Link_to_curl(Link *link)
{
lprintf(debug, "%s\n", link->f_url);
CURL *curl = curl_easy_init();
if (!curl) {
lprintf(fatal, "curl_easy_init() failed!\n");
@ -184,6 +187,7 @@ static CURL *Link_to_curl(Link *link)
static void Link_req_file_stat(Link *this_link)
{
lprintf(debug, "%s\n", this_link->f_url);
CURL *curl = Link_to_curl(this_link);
CURLcode ret = curl_easy_setopt(curl, CURLOPT_NOBODY, 1);
if (ret) {
@ -259,7 +263,8 @@ static void LinkTable_uninitialised_fill(LinkTable *linktbl)
*/
static LinkTable *single_LinkTable_new(const char *url)
{
char *ptr = strrchr(url, '/') + 1;
char *orig_ptr = strrchr(url, '/') + 1;
char *ptr = curl_easy_unescape(NULL, orig_ptr, 0, NULL);
LinkTable *linktbl = LinkTable_alloc(url);
Link *link = Link_new(ptr, LINK_UNINITIALISED_FILE);
strncpy(link->f_url, url, MAX_FILENAME_LEN);
@ -269,26 +274,20 @@ static LinkTable *single_LinkTable_new(const char *url)
return linktbl;
}
LinkTable *LinkSystem_init(const char *raw_url)
LinkTable *LinkSystem_init(const char *url)
{
if (pthread_mutex_init(&link_lock, NULL)) {
lprintf(error, "link_lock initialisation failed!\n");
}
/*
* Remove excess '/' if it is there
*/
char *url = strdup(raw_url);
int url_len = strnlen(url, MAX_PATH_LEN) - 1;
if (url[url_len] == '/') {
url[url_len] = '\0';
}
/*
* --------- Set the length of the root link -----------
*/
/*
* This is where the '/' should be
*/
ROOT_LINK_OFFSET = strnlen(url, MAX_PATH_LEN);
ROOT_LINK_OFFSET = strnlen(url, MAX_PATH_LEN) -
((url[url_len] == '/') ? 1 : 0);
/*
* --------------------- Enable cache system --------------------
@ -319,7 +318,6 @@ LinkTable *LinkSystem_init(const char *raw_url)
} else {
lprintf(fatal, "Invalid CONFIG.mode\n");
}
FREE(url);
return ROOT_LINK_TBL;
}
@ -388,7 +386,8 @@ static int linknames_equal(char *linkname, const char *linkname_new)
* Shamelessly copied and pasted from:
* https://github.com/google/gumbo-parser/blob/master/examples/find_links.cc
*/
static void HTML_to_LinkTable(GumboNode *node, LinkTable *linktbl)
static void HTML_to_LinkTable(const char *url, GumboNode *node,
LinkTable *linktbl)
{
if (node->type != GUMBO_NODE_ELEMENT) {
return;
@ -397,23 +396,21 @@ static void HTML_to_LinkTable(GumboNode *node, LinkTable *linktbl)
if (node->v.element.tag == GUMBO_TAG_A &&
(href =
gumbo_get_attribute(&node->v.element.attributes, "href"))) {
char *link_url = (char *) href->value;
make_link_relative(url, link_url);
/*
* if it is valid, copy the link onto the heap
*/
LinkType type = linkname_to_LinkType(href->value);
LinkType type = linkname_to_LinkType(link_url);
/*
* We also check if the link being added is the same as the last link.
* This is to prevent duplicated link, if an Apache server has the
* IconsAreLinks option.
*/
size_t comp_len = strnlen(href->value, MAX_FILENAME_LEN);
if (type == LINK_DIR) {
comp_len--;
}
if (((type == LINK_DIR) || (type == LINK_UNINITIALISED_FILE)) &&
!linknames_equal(linktbl->links[linktbl->num - 1]->linkname,
href->value)) {
LinkTable_add(linktbl, Link_new(href->value, type));
link_url)) {
LinkTable_add(linktbl, Link_new(link_url, type));
}
}
/*
@ -421,7 +418,7 @@ static void HTML_to_LinkTable(GumboNode *node, LinkTable *linktbl)
*/
GumboVector *children = &node->v.element.children;
for (size_t i = 0; i < children->length; ++i) {
HTML_to_LinkTable((GumboNode *) children->data[i], linktbl);
HTML_to_LinkTable(url, (GumboNode *) children->data[i], linktbl);
}
return;
}
@ -435,9 +432,9 @@ void Link_set_file_stat(Link *this_link, CURL *curl)
lprintf(error, "%s", curl_easy_strerror(ret));
}
if (http_resp == HTTP_OK) {
double cl = 0;
curl_off_t cl = 0;
ret =
curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &cl);
curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &cl);
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
@ -466,19 +463,35 @@ void Link_set_file_stat(Link *this_link, CURL *curl)
static void LinkTable_fill(LinkTable *linktbl)
{
Link *head_link = linktbl->links[0];
lprintf(debug, "Filling %s\n", head_link->f_url);
for (int i = 1; i < linktbl->num; i++) {
Link *this_link = linktbl->links[i];
char *url;
url = path_append(head_link->f_url, this_link->linkname);
/* Some web sites use characters in their href attributes that really
shouldn't be in their href attributes, most commonly spaces. And
some web sites _do_ properly encode their href attributes. So we
first unescape the link path, and then we escape it, so that curl
will definitely be happy with it (e.g., curl won't accept URLs with
spaces in them!). If we only escaped it, and there were already
encoded characters in it, then that would break the link. */
char *unescaped_path = curl_easy_unescape(NULL, this_link->linkpath, 0,
NULL);
char *escaped_path = curl_easy_escape(NULL, unescaped_path, 0);
curl_free(unescaped_path);
/* Our code does the wrong thing if there's a trailing slash that's been
replaced with %2F, which curl_easy_escape does, God bless it, so if
it did that then let's put it back. */
int escaped_len = strlen(escaped_path);
if (escaped_len >= 3 && !strcmp(escaped_path + escaped_len - 3, "%2F"))
strcpy(escaped_path + escaped_len - 3, "/");
char *url = path_append(head_link->f_url, escaped_path);
curl_free(escaped_path);
strncpy(this_link->f_url, url, MAX_PATH_LEN);
FREE(url);
char *unescaped_linkname;
CURL *c = curl_easy_init();
unescaped_linkname = curl_easy_unescape(c, this_link->linkname,
unescaped_linkname = curl_easy_unescape(NULL, this_link->linkname,
0, NULL);
strncpy(this_link->linkname, unescaped_linkname, MAX_FILENAME_LEN);
curl_free(unescaped_linkname);
curl_easy_cleanup(c);
}
LinkTable_uninitialised_fill(linktbl);
}
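The unescape-then-escape sequence above normalises every link path so curl accepts it, whether or not the server already percent-encoded its href attributes. A self-contained sketch of that normalisation, assuming <curl/curl.h> and <string.h> are included (the helper name is illustrative, not part of the source):

/* Sketch: normalise a link path for curl; release the result with curl_free(). */
static char *normalise_link_path(const char *raw)
{
    char *unescaped = curl_easy_unescape(NULL, raw, 0, NULL);
    char *escaped = curl_easy_escape(NULL, unescaped, 0);
    curl_free(unescaped);
    /* curl_easy_escape() also encodes a trailing '/' as "%2F"; put it back
       so directory links keep their trailing slash. */
    size_t len = strlen(escaped);
    if (len >= 3 && !strcmp(escaped + len - 3, "%2F")) {
        strcpy(escaped + len - 3, "/");
    }
    return escaped;
}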
@ -501,11 +514,14 @@ static void LinkTable_invalid_reset(LinkTable *linktbl)
void LinkTable_free(LinkTable *linktbl)
{
for (int i = 0; i < linktbl->num; i++) {
FREE(linktbl->links[i]);
if (linktbl) {
for (int i = 0; i < linktbl->num; i++) {
LinkTable_free(linktbl->links[i]->next_table);
FREE(linktbl->links[i]);
}
FREE(linktbl->links);
FREE(linktbl);
}
FREE(linktbl->links);
FREE(linktbl);
}
void LinkTable_print(LinkTable *linktbl)
@ -538,6 +554,10 @@ void LinkTable_print(LinkTable *linktbl)
LinkTable *LinkTable_alloc(const char *url)
{
LinkTable *linktbl = CALLOC(1, sizeof(LinkTable));
linktbl->num = 0;
linktbl->index_time = 0;
linktbl->links = NULL;
/*
* populate the base URL
@ -551,78 +571,81 @@ LinkTable *LinkTable_alloc(const char *url)
LinkTable *LinkTable_new(const char *url)
{
LinkTable *linktbl = LinkTable_alloc(url);
/*
* start downloading the base URL
*/
TransferStruct ts = Link_download_full(linktbl->links[0]);
if (ts.curr_size == 0) {
LinkTable_free(linktbl);
return NULL;
}
/*
* Otherwise parse the received data
*/
GumboOutput *output = gumbo_parse(ts.data);
HTML_to_LinkTable(output->root, linktbl);
gumbo_destroy_output(&kGumboDefaultOptions, output);
FREE(ts.data);
int skip_fill = 0;
char *unescaped_path;
CURL *c = curl_easy_init();
unescaped_path =
curl_easy_unescape(c, url + ROOT_LINK_OFFSET, 0, NULL);
curl_easy_unescape(NULL, url + ROOT_LINK_OFFSET, 0, NULL);
LinkTable *linktbl = NULL;
/*
* Attempt to load the LinkTable from the disk.
*/
if (CACHE_SYSTEM_INIT) {
CacheDir_create(unescaped_path);
LinkTable *disk_linktbl;
disk_linktbl = LinkTable_disk_open(unescaped_path);
if (disk_linktbl) {
/*
* Check if we need to update the link table
* Check if the LinkTable needs to be refreshed based on timeout.
*/
lprintf(debug,
"disk_linktbl->num: %d, linktbl->num: %d\n",
disk_linktbl->num, linktbl->num);
if (disk_linktbl->num == linktbl->num) {
LinkTable_free(linktbl);
linktbl = disk_linktbl;
skip_fill = 1;
} else {
time_t time_now = time(NULL);
if (time_now - disk_linktbl->index_time > CONFIG.refresh_timeout) {
lprintf(info, "time_now: %d, index_time: %d\n", time_now,
disk_linktbl->index_time);
lprintf(info, "diff: %d, limit: %d\n",
time_now - disk_linktbl->index_time,
CONFIG.refresh_timeout);
LinkTable_free(disk_linktbl);
} else {
linktbl = disk_linktbl;
}
}
}
if (!skip_fill) {
/*
* Fill in the link table
*/
LinkTable_fill(linktbl);
} else {
/*
* Fill in the holes in the link table
*/
LinkTable_invalid_reset(linktbl);
LinkTable_uninitialised_fill(linktbl);
}
/*
* Save the link table
* Download a new LinkTable because we didn't manage to load it from the
* disk
*/
if (CACHE_SYSTEM_INIT) {
if (LinkTable_disk_save(linktbl, unescaped_path)) {
lprintf(error, "Failed to save the LinkTable!\n");
if (!linktbl) {
linktbl = LinkTable_alloc(url);
linktbl->index_time = time(NULL);
lprintf(debug, "linktbl->index_time: %d\n", linktbl->index_time);
/*
* start downloading the base URL
*/
TransferStruct ts = Link_download_full(linktbl->links[0]);
if (ts.curr_size == 0) {
LinkTable_free(linktbl);
return NULL;
}
/*
* Otherwise parse the received data
*/
GumboOutput *output = gumbo_parse(ts.data);
HTML_to_LinkTable(url, output->root, linktbl);
gumbo_destroy_output(&kGumboDefaultOptions, output);
FREE(ts.data);
LinkTable_fill(linktbl);
/*
* Save the link table
*/
if (CACHE_SYSTEM_INIT) {
if (LinkTable_disk_save(linktbl, unescaped_path)) {
lprintf(error, "Failed to save the LinkTable!\n");
}
}
}
curl_free(unescaped_path);
curl_easy_cleanup(c);
#ifdef DEBUG
static int i = 0;
lprintf(debug, "!!!!Calling LinkTable_new for the %d time!!!!\n", i);
i++;
#endif
free(unescaped_path);
LinkTable_print(linktbl);
return linktbl;
}
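A brief worked example of the refresh logic above: with the default --refresh-timeout of 3600 seconds, a LinkTable whose index_time was stamped at 10:00 is reloaded from disk and reused until 11:00; the first directory lookup after that finds the difference above the limit, frees the on-disk copy, and falls through to the download-and-reindex branch (the if (!linktbl) block), which stamps a fresh index_time before saving.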
@ -638,6 +661,14 @@ static void LinkTable_disk_delete(const char *dirn)
FREE(metadirn);
}
/* This is necessary to get the compiler on some platforms to stop
complaining about the fact that we're not using the return value of
fread, when we know we aren't and that's fine. */
static inline void ignore_value(int i)
{
(void) i;
}
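The helper is needed because, at least on GCC with _FORTIFY_SOURCE, casting a warn_unused_result call such as fwrite() to (void) does not silence -Wunused-result, while passing the value through a function does. A minimal illustration, with buf, len, and fp as hypothetical placeholders:

(void) fwrite(buf, 1, len, fp);           /* GCC may still warn here */
ignore_value(fwrite(buf, 1, len, fp));    /* value is consumed, no warning */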
int LinkTable_disk_save(LinkTable *linktbl, const char *dirn)
{
char *metadirn = path_append(META_DIR, dirn);
@ -651,16 +682,20 @@ int LinkTable_disk_save(LinkTable *linktbl, const char *dirn)
FREE(path);
return -1;
}
FREE(path);
fwrite(&linktbl->num, sizeof(int), 1, fp);
lprintf(debug, "linktbl->index_time: %d\n", linktbl->index_time);
if (fwrite(&linktbl->num, sizeof(int), 1, fp) != 1 ||
fwrite(&linktbl->index_time, sizeof(time_t), 1, fp) != 1) {
lprintf(error, "Failed to save the header of %s!\n", path);
}
FREE(path);
for (int i = 0; i < linktbl->num; i++) {
fwrite(linktbl->links[i]->linkname, sizeof(char),
MAX_FILENAME_LEN, fp);
fwrite(linktbl->links[i]->f_url, sizeof(char), MAX_PATH_LEN, fp);
fwrite(&linktbl->links[i]->type, sizeof(LinkType), 1, fp);
fwrite(&linktbl->links[i]->content_length, sizeof(size_t), 1, fp);
fwrite(&linktbl->links[i]->time, sizeof(long), 1, fp);
ignore_value(fwrite(linktbl->links[i]->linkname, sizeof(char),
MAX_FILENAME_LEN, fp));
ignore_value(fwrite(linktbl->links[i]->f_url, sizeof(char), MAX_PATH_LEN, fp));
ignore_value(fwrite(&linktbl->links[i]->type, sizeof(LinkType), 1, fp));
ignore_value(fwrite(&linktbl->links[i]->content_length, sizeof(size_t), 1, fp));
ignore_value(fwrite(&linktbl->links[i]->time, sizeof(long), 1, fp));
}
int res = 0;
@ -692,33 +727,43 @@ LinkTable *LinkTable_disk_open(const char *dirn)
FREE(metadirn);
if (!fp) {
lprintf(debug, "Linktable at %s does not exist.", path);
FREE(path);
return NULL;
}
LinkTable *linktbl = CALLOC(1, sizeof(LinkTable));
if (fread(&linktbl->num, sizeof(int), 1, fp) != 1 ||
fread(&linktbl->index_time, sizeof(time_t), 1, fp) != 1) {
lprintf(error, "Failed to read the header of %s!\n", path);
LinkTable_free(linktbl);
LinkTable_disk_delete(dirn);
FREE(path);
return NULL;
}
lprintf(debug, "linktbl->index_time: %d\n", linktbl->index_time);
fread(&linktbl->num, sizeof(int), 1, fp);
linktbl->links = CALLOC(linktbl->num, sizeof(Link *));
for (int i = 0; i < linktbl->num; i++) {
linktbl->links[i] = CALLOC(1, sizeof(Link));
fread(linktbl->links[i]->linkname, sizeof(char),
MAX_FILENAME_LEN, fp);
fread(linktbl->links[i]->f_url, sizeof(char), MAX_PATH_LEN, fp);
fread(&linktbl->links[i]->type, sizeof(LinkType), 1, fp);
fread(&linktbl->links[i]->content_length, sizeof(size_t), 1, fp);
fread(&linktbl->links[i]->time, sizeof(long), 1, fp);
/* The return values are safe to ignore here since we check them
immediately afterwards with feof() and ferror(). */
ignore_value(fread(linktbl->links[i]->linkname, sizeof(char),
MAX_FILENAME_LEN, fp));
ignore_value(fread(linktbl->links[i]->f_url, sizeof(char),
MAX_PATH_LEN, fp));
ignore_value(fread(&linktbl->links[i]->type, sizeof(LinkType), 1, fp));
ignore_value(fread(&linktbl->links[i]->content_length,
sizeof(size_t), 1, fp));
ignore_value(fread(&linktbl->links[i]->time, sizeof(long), 1, fp));
if (feof(fp)) {
/*
* reached EOF
*/
lprintf(error, "reached EOF!\n");
lprintf(error, "Corrupted LinkTable!\n");
LinkTable_free(linktbl);
LinkTable_disk_delete(dirn);
return NULL;
}
if (ferror(fp)) {
lprintf(error, "encountered ferror!\n");
lprintf(error, "Encountered ferror!\n");
LinkTable_free(linktbl);
LinkTable_disk_delete(dirn);
return NULL;
@ -728,29 +773,51 @@ LinkTable *LinkTable_disk_open(const char *dirn)
lprintf(error,
"cannot close the file pointer, %s\n", strerror(errno));
}
FREE(path);
return linktbl;
}
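Taken together, LinkTable_disk_save() and LinkTable_disk_open() define a small fixed-record cache format: a header holding num and the new index_time, followed by one fixed-width record per link. A sketch of the implied layout (illustrative only; the code reads and writes the fields individually rather than as a struct):

/* Implied on-disk layout of a cached LinkTable:
 *   int      num;                               -- header
 *   time_t   index_time;                        -- header, new in this change
 *   then, repeated num times:
 *     char     linkname[MAX_FILENAME_LEN];
 *     char     f_url[MAX_PATH_LEN];
 *     LinkType type;
 *     size_t   content_length;
 *     long     time;
 */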
LinkTable *path_to_Link_LinkTable_new(const char *path)
{
Link *link = path_to_Link(path);
LinkTable *next_table = link->next_table;
Link *link = NULL;
Link *tmp_link = NULL;
Link link_cpy = { 0 };
LinkTable *next_table = NULL;
if (!strcmp(path, "/")) {
next_table = ROOT_LINK_TBL;
link_cpy = *next_table->links[0];
tmp_link = &link_cpy;
} else {
link = path_to_Link(path);
tmp_link = link;
}
if (next_table) {
}
if (!next_table) {
if (CONFIG.mode == NORMAL) {
next_table = LinkTable_new(link->f_url);
next_table = LinkTable_new(tmp_link->f_url);
} else if (CONFIG.mode == SINGLE) {
next_table = single_LinkTable_new(tmp_link->f_url);
} else if (CONFIG.mode == SONIC) {
if (!CONFIG.sonic_id3) {
next_table = sonic_LinkTable_new_index(link->sonic.id);
next_table = sonic_LinkTable_new_index(tmp_link->sonic.id);
} else {
next_table =
sonic_LinkTable_new_id3(link->sonic.depth,
link->sonic.id);
sonic_LinkTable_new_id3(tmp_link->sonic.depth,
tmp_link->sonic.id);
}
} else {
lprintf(fatal, "Invalid CONFIG.mode\n");
lprintf(fatal, "Invalid CONFIG.mode: %d\n", CONFIG.mode);
}
}
link->next_table = next_table;
if (link) {
link->next_table = next_table;
} else {
ROOT_LINK_TBL = next_table;
}
return next_table;
}
@ -916,7 +983,7 @@ static CURL *Link_download_curl_setup(Link *link, size_t req_size, off_t offset,
}
size_t start = offset;
size_t end = start + req_size;
size_t end = start + req_size - 1;
char range_str[64];
snprintf(range_str, sizeof(range_str), "%lu-%lu", start, end);
@ -964,9 +1031,9 @@ range requests\n");
if (ret) {
lprintf(error, "%s", curl_easy_strerror(ret));
}
if (!((http_resp != HTTP_OK) ||
(http_resp != HTTP_PARTIAL_CONTENT) ||
(http_resp != HTTP_RANGE_NOT_SATISFIABLE))) {
if ((http_resp != HTTP_OK) &&
(http_resp != HTTP_PARTIAL_CONTENT) &&
(http_resp != HTTP_RANGE_NOT_SATISFIABLE)) {
char *url;
curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
lprintf(warning, "Could not download %s, HTTP %ld\n", url, http_resp);
@ -996,6 +1063,13 @@ long Link_download(Link *link, char *output_buf, size_t req_size, off_t offset)
header.curr_size = 0;
header.data = NULL;
if (offset + req_size > link->content_length) {
lprintf(info,
"requested size too larger than remaining size, req_size: %lu, recv: %ld, content-length: %ld\n",
req_size, recv, link->content_length);
req_size = link->content_length - offset;
}
CURL *curl = Link_download_curl_setup(link, req_size, offset, &header, &ts);
transfer_blocking(curl);
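Two related fixes meet in this function: the requested size is clamped so offset + req_size never runs past the Content-Length, and the Range header now ends at an inclusive index (start + req_size - 1), as HTTP range requests require. A small worked sketch against a hypothetical 10000-byte file:

/* Sketch: clamp a 4096-byte read at offset 8192 of a 10000-byte file,
   then build the inclusive Range value that curl expects. */
size_t content_length = 10000, offset = 8192, req_size = 4096;
if (offset + req_size > content_length) {
    req_size = content_length - offset;               /* 1808 bytes left */
}
char range_str[64];
snprintf(range_str, sizeof(range_str), "%lu-%lu",
         (unsigned long) offset, (unsigned long) (offset + req_size - 1));
/* range_str is now "8192-9999" */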
@ -1030,3 +1104,73 @@ long path_download(const char *path, char *output_buf, size_t req_size,
return Link_download(link, output_buf, req_size, offset);
}
static void make_link_relative(const char *page_url, char *link_url)
{
/*
Some servers make the links to subdirectories absolute (in URI terms:
path-absolute), but our code expects them to be relative (in URI terms:
path-noscheme), so change the contents of link_url as needed to
accommodate that.
Also, some servers serve their links as `./name`. This is helpful to
them because it is the only way to express relative references when the
first new path segment of the target contains an unescaped colon (`:`),
eg in `./6:1-balun.png`. While stripping the ./ strictly speaking
reintroduces that ambiguity, it is of little practical concern in this
implementation, as full URI link targets are filtered by their number of
slashes anyway. In URI terms, this converts path-noscheme with a leading
`.` segment into path-noscheme or path-rootless without that segment.
*/
if (link_url[0] == '.' && link_url[1] == '/') {
memmove(link_url, link_url + 2, strlen(link_url) - 1);
return;
}
if (link_url[0] != '/') {
/* Already relative, nothing to do here!
(Full URIs, eg. `http://example.com/path`, pass through here
unmodified, but those are classified in different LinkTypes later
anyway).
*/
return;
}
/* Find the slash after the host name. */
int slashes_left_to_find = 3;
while (*page_url) {
if (*page_url == '/' && ! --slashes_left_to_find)
break;
/* N.B. This is here, rather than doing `while (*page_url++)`, because
when we're done we want the pointer to point at the final slash. */
page_url++;
}
if (slashes_left_to_find) {
if (slashes_left_to_find == 1 && ! *page_url)
/* We're at the top level of the web site and the user entered the URL
without a trailing slash. */
page_url = "/";
else
/* Well, that's odd. Let's return rather than trying to dig ourselves
deeper into whatever hole we're in. */
return;
}
/* The page URL is no longer the full page_url, it's just the part after
the host name. */
/* The link URL should start with the page URL. */
if (strstr(link_url, page_url) != link_url)
return;
int skip_len = strlen(page_url);
if (page_url[skip_len-1] != '/') {
if (page_url[skip_len] != '/')
/* Um, I'm not sure what to do here, so give up. */
return;
skip_len++;
}
/* Move the part of the link URL after the parent page's path to
the beginning of the link URL string, discarding what came
before it. */
memmove(link_url, link_url + skip_len, strlen(link_url) - skip_len + 1);
}
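A concrete trace may help: with page_url "http://example.com/pub/linux/" and link_url "/pub/linux/kernel/", the scan stops at the third slash (the one right after the host name), leaving "/pub/linux/" as the page path; link_url starts with that prefix, so the final memmove keeps only "kernel/". A leading "./" (as in "./6:1-balun.png") never reaches that logic, because the first branch strips it and returns immediately.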


@ -33,6 +33,7 @@ typedef enum {
*/
struct LinkTable {
int num;
time_t index_time;
Link **links;
};
@ -42,6 +43,8 @@ struct LinkTable {
struct Link {
/** \brief The link name in the last level of the URL */
char linkname[MAX_FILENAME_LEN + 1];
/** \brief This is for storing the unescaped path */
char linkpath[MAX_FILENAME_LEN + 1];
/** \brief The full URL of the file */
char f_url[MAX_PATH_LEN + 1];
/** \brief The type of the link */
@ -114,6 +117,7 @@ int LinkTable_disk_save(LinkTable *linktbl, const char *dirn);
/**
* \brief load a link table from the disk.
* \param[in] dirn We expect the unescaped_path here!
*/
LinkTable *LinkTable_disk_open(const char *dirn);


@ -15,7 +15,11 @@ int log_level_init()
if (env) {
return atoi(env);
}
#ifdef DEBUG
return DEFAULT_LOG_LEVEL | debug;
#else
return DEFAULT_LOG_LEVEL;
#endif
}
void
@ -36,7 +40,11 @@ log_printf(LogType type, const char *file, const char *func, int line,
case info:
goto print_actual_message;
default:
fprintf(stderr, "Debug(%x):", type);
fprintf(stderr, "Debug");
if (type != debug) {
fprintf(stderr, "(%x)", type);
}
fprintf(stderr, ":");
break;
}


@ -39,10 +39,11 @@ void log_printf(LogType type, const char *file, const char *func, int line,
* \details This macro automatically prints out the filename and line number
*/
#define lprintf(type, ...) \
log_printf(type, __FILE__, __func__, __LINE__, __VA_ARGS__);
#endif
log_printf(type, __FILE__, __func__, __LINE__, __VA_ARGS__)
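Dropping the trailing semicolon from the macro matters because callers supply their own: with the old definition an unbraced if/else expanded into two statements, and the stray empty statement detached the else. A minimal sketch of the failure mode (the condition and messages are hypothetical):

/* Under the old macro this expanded to 'log_printf(...); ;' and the extra
   ';' ended the if body, leaving the 'else' without a matching 'if'. */
if (rc != CURLE_OK)
    lprintf(error, "transfer failed\n");
else
    lprintf(debug, "transfer OK\n");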
/**
* \brief Print the version information for HTTPDirFS
*/
void print_version();
#endif


@ -201,6 +201,7 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
{ "single-file-mode", required_argument, NULL, 'L' }, /* 22 */
{ "cacert", required_argument, NULL, 'L' }, /* 23 */
{ "proxy-cacert", required_argument, NULL, 'L' }, /* 24 */
{ "refresh-timeout", required_argument, NULL, 'L' }, /* 25 */
{ 0, 0, 0, 0 }
};
while ((c =
@ -219,11 +220,12 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
*/
return 1;
case 'V':
print_version(argv[0], 1);
print_version();
add_arg(fuse_argv, fuse_argc, "-V");
return 1;
case 'd':
add_arg(fuse_argv, fuse_argc, "-d");
CONFIG.log_type |= debug;
break;
case 'f':
add_arg(fuse_argv, fuse_argc, "-f");
@ -304,6 +306,9 @@ parse_arg_list(int argc, char **argv, char ***fuse_argv, int *fuse_argc)
case 24:
CONFIG.proxy_cafile = strdup(optarg);
break;
case 25:
CONFIG.refresh_timeout = atoi(optarg);
break;
default:
fprintf(stderr, "see httpdirfs -h for usage\n");
return 1;
@ -370,10 +375,12 @@ HTTPDirFS options:\n\
to 1TB in size using the default segment size.\n\
--max-conns Set maximum number of network connections that\n\
libcurl is allowed to make. (default: 10)\n\
--refresh-timeout The directories are refreshed after the specified\n\
time, in seconds (default: 3600)\n\
--retry-wait Set delay in seconds before retrying an HTTP request\n\
after encountering an error. (default: 5)\n\
--user-agent Set user agent string (default: \"HTTPDirFS\")\n\
--no-range-check Disable the build-in check for the server's support\n\
--no-range-check Disable the built-in check for the server's support\n\
for HTTP range requests\n\
--insecure-tls Disable libcurl TLS certificate verification by\n\
setting CURLOPT_SSL_VERIFYHOST to 0\n\


@ -24,7 +24,7 @@ typedef struct {
static SonicConfigStruct SONIC_CONFIG;
/**
* \brief initalise Sonic configuration struct
* \brief initialise Sonic configuration struct
*/
void
sonic_config_init(const char *server, const char *username,
@ -319,27 +319,28 @@ XML_parser_general(void *data, const char *elem, const char **attr)
LinkTable_add(linktbl, link);
}
static void sanitise_LinkTable(LinkTable *linktbl) {
static void sanitise_LinkTable(LinkTable *linktbl)
{
for (int i = 0; i < linktbl->num; i++) {
if (!strcmp(linktbl->links[i]->linkname, ".")) {
if (!strcmp(linktbl->links[i]->linkname, ".")) {
/* Note the super long sanitised name to avoid collision */
strcpy(linktbl->links[i]->linkname, "__DOT__");
}
strcpy(linktbl->links[i]->linkname, "__DOT__");
}
if (!strcmp(linktbl->links[i]->linkname, "/")) {
if (!strcmp(linktbl->links[i]->linkname, "/")) {
/* Ditto */
strcpy(linktbl->links[i]->linkname, "__FORWARD-SLASH__");
}
strcpy(linktbl->links[i]->linkname, "__FORWARD-SLASH__");
}
for (size_t j = 0; j < strlen(linktbl->links[i]->linkname); j++) {
if (linktbl->links[i]->linkname[j] == '/') {
linktbl->links[i]->linkname[j] = '-';
}
}
for (size_t j = 0; j < strlen(linktbl->links[i]->linkname); j++) {
if (linktbl->links[i]->linkname[j] == '/') {
linktbl->links[i]->linkname[j] = '-';
}
}
if (linktbl->links[i]->next_table != NULL) {
sanitise_LinkTable(linktbl->links[i]->next_table);
}
if (linktbl->links[i]->next_table != NULL) {
sanitise_LinkTable(linktbl->links[i]->next_table);
}
}
}
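In effect the sanitiser rewrites Sonic names that would be illegal or ambiguous as directory entries: a name that is literally "." becomes "__DOT__", a literal "/" becomes "__FORWARD-SLASH__", and any embedded slash is replaced with a dash, so an artist such as "AC/DC" would show up as "AC-DC" in the mounted tree; tables reachable via next_table are sanitised recursively.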


@ -8,7 +8,7 @@
typedef struct {
/**
* \brief Sonic id field
* \details This is used to store the followings:
* \details This is used to store the following:
* - Artist ID
* - Album ID
* - Song ID