least two right ways to do it.

The wrong way is to use the `Dockerfile` `COPY` command: baking the
repo into the image at build time makes it part of one of the image’s
base layers. The end result is that each time you build a container from
that image, the repo will be reset to its build-time state. Worse,
restarting the container will do the same thing, since the base image
layers are immutable. This is almost certainly not what you
want.
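
For illustration, the anti-pattern amounts to a `Dockerfile` line like
this (the path is hypothetical):

```
  # Wrong: bakes the repo into an immutable base layer of the image
  COPY repo.fossil /museum/
```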

The correct ways put the repo into the _container_ created from the
_image_, not in the image itself.


### <a id="repo-inside"></a> 2.1 Storing the Repo Inside the Container
privileges after it enters the chroot. (See [below](#args) for how to
change this default.) You don’t have to restart the server after fixing
this with `chmod`: simply reload the browser, and Fossil will try again.
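
For example, assuming the container is named `fossil` and the repo
lives under `/museum` inside it, one blunt host-side fix is:

```
  $ docker exec fossil chmod -R a+rw /museum
```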


### 2.2 <a id="bind-mount"></a>Storing the Repo Outside the Container

The simple storage method above has a problem: containers are
designed to be killed off at the slightest cause, rebuilt, and
redeployed. If you do that with the repo inside the container, it gets
destroyed, too. The solution is to replace the “run” command above with
the following:

```
  $ docker run \
    --publish 9999:8080 \
    --name fossil-bind-mount \
    --volume ~/museum:/museum \
    fossil
```

Because this bind mount maps a host-side directory (`~/museum`) into the
container, you don’t need to `docker cp` the repo into the container at
all. It still expects to find the repository as `repo.fossil` under that
directory, but now both the host and the container can see that repo DB.

Instead of a bind mount, you could set up a separate
[volume](https://docs.docker.com/storage/volumes/), at which point you
_would_ need to `docker cp` the repo file into the container.
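
A sketch of that variant (the volume and container names are
illustrative):

```
  $ docker volume create museum
  $ docker run \
    --publish 9999:8080 \
    --name fossil-volume \
    --volume museum:/museum \
    fossil
  $ docker cp ~/museum/repo.fossil fossil-volume:/museum/
```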

Either way, files in these mounted directories have a lifetime
independent of the container(s) they’re mounted into. When you need to
rebuild the container or its underlying image — such as to upgrade to a
newer version of Fossil — the external directory remains behind and gets
remapped into the new container when you recreate it with `--volume/-v`.


## 3. <a id="security"></a>Security

### 3.1 <a id="chroot"></a>Why Not Chroot?

Prior to 2023.03.26, the stock Fossil container relied on [the chroot
jail feature](./chroot.md) to wall away the shell and other tools
provided by [BusyBox]. It included that as a bare-bones operating system
inside the container on the off chance that someone might need it for
debugging, but the thing is, Fossil is self-contained, needing none of
that power in the main-line use cases.

Our weak “you might need it” justification collapsed when we realized
you could restore this basic shell environment with a one-line change to
the `Dockerfile`, as shown [below](#run).

[BusyBox]: https://www.busybox.net/BusyBox.html


### 3.2 <a id="run"></a>Swapping Out the Run Layer

If you want a basic shell environment for temporary debugging of the
running container, that’s easily added. Simply change this line in the
`Dockerfile`…

      FROM scratch AS run

…to this:

      FROM busybox AS run

Rebuild, redeploy, and your Fossil container now has a BusyBox based
shell environment that you can get into via:

      $ docker exec -it -u fossil $(make container-version) sh

(That command assumes you built the container via “`make container`” and
are therefore using its versioning scheme.)

Another case where you might need to replace this bare-bones “`run`”
layer with something more functional is that you’re setting up [email
alerts](./alerts.md) and need some way to integrate with the host’s
[MTA]. There are a number of alternatives in that linked document, so
for the sake of discussion, we’ll say you’ve chosen [Method
2](./alerts.md#db), which requires a Tcl interpreter and its SQLite
extension to push messages into the outbound email queue DB, presumably
bind-mounted into the container.

You can do that by replacing STAGEs 2 and 3 in the stock `Dockerfile`
with this:

```
    ## ---------------------------------------------------------------------
    ## STAGE 2: Pare that back to the bare essentials, plus Tcl.
    ## ---------------------------------------------------------------------
    FROM alpine AS run
    ARG UID=499
    ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin"
    COPY --from=builder /tmp/fossil /bin/
    COPY tools/email-sender.tcl /bin/
    RUN set -x                                                              \
        && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \
        && echo "fossil:x:${UID}:fossil"                     >> /etc/group  \
        && install -d -m 700 -o fossil -g fossil log museum                 \
        && apk add --no-cache tcl sqlite-tcl
```

Build it and test that it works like so:

```
    $ make container-run &&
      echo 'puts [info patchlevel]' |
      docker exec -i $(make container-version) tclsh 
    8.6.12
```

You should remove the `PATH` override in the “RUN”
stage, since it’s written for the case where everything is in `/bin`.
With these additions, we need the longer `PATH` shown above to have
ready access to them all.

Our weak “you might need it” justification collapsed when we realized
Another useful case to consider is that you’ve installed a [server
extension](./serverext.wiki) and you need an interpreter for that
script. The first option above won’t work except in the unlikely case that
it’s written for one of the bare-bones script interpreters that BusyBox
ships.(^BusyBox’s `/bin/sh` is based on the old 4.4BSD Lite Almquist
shell, implementing little more than what POSIX specified in 1989, plus
equally stripped-down versions of `awk` and `sed`.)

you could restore this basic shell environment with a one-line change to
Let’s say the extension is written in Python. While you could handle it
the same way we do with the Tcl example above, Python is more
popular, giving us more options. Let’s inject a Python environment into
the stock Fossil container via a suitable “[distroless]” image instead:

the `Dockerfile`, as shown [below](#run).
```
    ## ---------------------------------------------------------------------
    ## STAGE 2: Pare that back to the bare essentials, plus Python.
    ## ---------------------------------------------------------------------
    FROM cgr.dev/chainguard/python:latest
    USER root
    ARG UID=499
    ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin"
    COPY --from=builder /tmp/fossil /bin/
    COPY --from=builder /bin/busybox.static /bin/busybox
    RUN [ "/bin/busybox", "--install", "/bin" ]
    RUN set -x                                                              \
        && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \
        && echo "fossil:x:${UID}:fossil"                     >> /etc/group  \
        && install -d -m 700 -o fossil -g fossil log museum
```

You will also have to add `busybox-static` to the APK package list in
STAGE 1 for the `RUN` script at the end of that stage to work, since the
[Chainguard Python image][cgimgs] lacks a shell, on purpose. The need to
install root-level binaries is why we change `USER` temporarily here.

[BusyBox]: https://www.busybox.net/BusyBox.html
Build it and test that it works like so:

```
    $ make container-run &&
      docker exec -i $(make container-version) python --version 
    3.11.2
```

The compensation for the hassle of using Chainguard over something more
general purpose like Alpine + “`apk add python`”
is huge: we no longer leave a package manager sitting around inside the
container, waiting for some malefactor to figure out how to abuse it.

Beware that there’s a limit to this über-jail’s ability to save you when
you go and provide a more capable OS layer like this. The container
layer should stop an attacker from accessing any files out on the host
that you haven’t explicitly mounted into the container’s namespace, but
it can’t stop them from making outbound network connections or modifying
the repo DB inside the container.

[cgimgs]:     https://github.com/chainguard-images/images/tree/main/images
[distroless]: https://www.chainguard.dev/unchained/minimal-container-images-towards-a-more-secure-future
[MTA]:        https://en.wikipedia.org/wiki/Message_transfer_agent


### 3.3 <a id="caps"></a>Dropping Unnecessary Capabilities
### 3.2 <a id="caps"></a>Dropping Unnecessary Capabilities

The example commands above create the container with [a default set of
Linux kernel capabilities][defcap]. Although Docker strips away almost
all of the traditional root capabilities by default, and Fossil needs
none of the ones it removes, Docker still leaves some enabled that
Fossil doesn’t actually need. You can tighten the scope of capabilities
by adding “`--cap-drop`” options to your container creation commands.
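
For example, these drops are safe because Fossil never writes kernel
audit records, creates device nodes, or opens raw sockets (an
illustrative subset, not an exhaustive audit):

```
  $ docker run \
    --publish 9999:8080 \
    --name fossil-fewer-caps \
    --cap-drop AUDIT_WRITE \
    --cap-drop MKNOD \
    --cap-drop NET_RAW \
    fossil
```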



## 4. <a id="static"></a>Extracting a Static Binary

Our 2-stage build process uses Alpine Linux only as a build host. Once
we’ve got everything reduced to a single static Fossil binary,
we throw all the rest of it away.

A secondary benefit falls out of this process for free: it’s arguably
the easiest way to build a purely static Fossil binary for Linux. Most
modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s
back-to-basics nature makes static builds work the way they used to,
back in the day. If that’s all you’re after, you can do so as easily as
this:
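
The stock `Dockerfile` installs the binary as `/bin/fossil`, so a
sketch of the extraction goes like this (the container name is
arbitrary):

```
  $ docker build -t fossil .
  $ docker create --name fossil-tmp fossil
  $ docker cp fossil-tmp:/bin/fossil .
  $ docker rm fossil-tmp
```
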
The resulting binary is the single largest file inside that container,
at about 6 MiB. (It’s built stripped.)

[lsl]: https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead


## 5. <a id="args"></a>Container Build Arguments
## 5. <a id="custom" name="args"></a>Customization Points

### <a id="pkg-vers"></a> 5.1 Fossil Version

The default version of Fossil fetched in the build is the version in the
checkout directory at the time you run it.  You could override it to get
a release build like so:

```
  $ docker build -t fossil --build-arg FSLVER=version-2.20 .
```

Or equivalently, using Fossil’s `Makefile` convenience target:

```
  $ make container-image DBFLAGS='--build-arg FSLVER=version-2.20'
```

While you could instead use the generic
“`release`” tag here, it’s better to use a specific version number
since container builders cache downloaded files, hoping to
reuse them across builds. If you ask for “`release`” before a new
version is tagged and then immediately after, you might expect to get
two different tarballs, but because the underlying source tarball URL
remains the same when you do that, you’ll end up reusing the
old tarball from cache. This will occur
even if you pass the “`docker build --no-cache`” option.

This is why we default to pulling the Fossil tarball by checkin ID
rather than letting it default to the generic “`trunk`” tag: so the URL will
change each time you update your Fossil source tree, forcing the builder to
pull a fresh tarball.


### 5.2 <a id="uids"></a>User & Group IDs

The “`fossil`” user and group IDs inside the container default to 499.
Why? Regular user IDs start at 500 or 1000 on most Unix-type systems,
leaving the IDs below that for system users like this Fossil daemon
owner. Since it’s typical for system IDs to start at 0 and go upward, we
started at 500 and went *down* one instead, reducing the chance of a
conflict to as close to zero as we can manage.

To change it to something else, say:

```
  $ make container-image \
    DBFLAGS='--build-arg UID=501'
```

This is particularly useful if you’re putting your repository on a
separate volume since the IDs “leak” out into the host environment via
file permissions. You may therefore wish them to mean something on both
sides of the container barrier rather than have “499” appear on the host
in “`ls -l`” output.
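
For instance, to make the container-side “`fossil`” user match your own
host account (a sensible choice for a single-admin server):

```
  $ make container-image \
    DBFLAGS="--build-arg UID=$(id -u)"
```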


### 5.3 <a id="config"></a>Fossil Configuration Options

for now, it suffices to point out that you can switch to Podman while
using our `Makefile` convenience targets unchanged by saying:

```
    $ make CENGINE=podman container-run
```


### 5.3 <a id="run"></a>Elaborating the Run Layer

If you want a basic shell environment for temporary debugging of the
running container, that’s easily added. Simply change this line in the
`Dockerfile`…

      FROM scratch AS run

…to this:

      FROM busybox AS run

Rebuild, redeploy, and your Fossil container will have a [BusyBox]-based
shell environment that you can get into via:

      $ docker exec -it -u fossil $(make container-version) sh

(That command assumes you built it via “`make container`” and are
therefore using its versioning scheme.)

Another case where you might need to replace this bare-bones “`run`”
layer with something more functional is that you’re setting up [email
alerts](./alerts.md) and need some way to integrate with the host’s
[MTA]. There are a number of alternatives in that linked document, so
for the sake of discussion, we’ll say you’ve chosen [Method
2](./alerts.md#db), which requires a Tcl interpreter and its SQLite
extension to push messages into the outbound email queue DB, presumably
bind-mounted into the container.

You can do that by replacing STAGEs 2 and 3 in the stock `Dockerfile`
with this:

```
    ## ---------------------------------------------------------------------
    ## STAGE 2: Pare that back to the bare essentials, plus Tcl.
    ## ---------------------------------------------------------------------
    FROM alpine AS run
    ARG UID=499
    ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin"
    COPY --from=builder /tmp/fossil /bin/
    COPY tools/email-sender.tcl /bin/
    RUN set -x                                                              \
        && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \
        && echo "fossil:x:${UID}:fossil"                     >> /etc/group  \
        && install -d -m 700 -o fossil -g fossil log museum                 \
        && apk add --no-cache tcl sqlite-tcl
```

Build it and test that it works like so:

```
    $ make container-run &&
      echo 'puts [info patchlevel]' |
      docker exec -i $(make container-version) tclsh 
    8.6.12
```

Note the `ENV PATH` override: the stock “run” stage’s setting is
written for the case where everything is in `/bin`, but with these
additions we need the longer `PATH` shown above to have ready access to
them all.

Another useful case to consider is that you’ve installed a [server
extension](./serverext.wiki) and you need an interpreter for that
script. The first option above won’t work except in the unlikely case that
it’s written for one of the bare-bones script interpreters that BusyBox
ships.(^[BusyBox]’s `/bin/sh` is based on the old 4.4BSD Lite Almquist
shell, implementing little more than what POSIX specified in 1989, plus
equally stripped-down versions of `awk` and `sed`.)

Let’s say the extension is written in Python. While you could handle it
the same way we do with the Tcl example above, Python is more
popular, giving us more options. Let’s inject a Python environment into
the stock Fossil container via a suitable “[distroless]” image instead:

```
    ## ---------------------------------------------------------------------
    ## STAGE 2: Pare that back to the bare essentials, plus Python.
    ## ---------------------------------------------------------------------
    FROM cgr.dev/chainguard/python:latest
    USER root
    ARG UID=499
    ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin"
    COPY --from=builder /tmp/fossil /bin/
    COPY --from=builder /bin/busybox.static /bin/busybox
    RUN [ "/bin/busybox", "--install", "/bin" ]
    RUN set -x                                                              \
        && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \
        && echo "fossil:x:${UID}:fossil"                     >> /etc/group  \
        && install -d -m 700 -o fossil -g fossil log museum
```

You will also have to add `busybox-static` to the APK package list in
STAGE 1 for the `RUN` script at the end of that stage to work, since the
[Chainguard Python image][cgimgs] lacks a shell, on purpose. The need to
install root-level binaries is why we change `USER` temporarily here.
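
That STAGE 1 addition might look something like this (the rest of the
package list here is illustrative, not the stock one):

```
    RUN apk add --no-cache busybox-static gcc make musl-dev openssl-dev zlib-dev
```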

Build it and test that it works like so:

```
    $ make container-run &&
      docker exec -i $(make container-version) python --version 
    3.11.2
```

The compensation for the hassle of using Chainguard over something more
general purpose like Alpine + “`apk add python`”
is huge: we no longer leave a package manager sitting around inside the
container, waiting for some malefactor to figure out how to abuse it.

Beware that there’s a limit to this über-jail’s ability to save you when
you go and provide a more capable OS layer like this. The container
layer should stop an attacker from accessing any files out on the host
that you haven’t explicitly mounted into the container’s namespace, but
it can’t stop them from making outbound network connections or modifying
the repo DB inside the container.

[cgimgs]:     https://github.com/chainguard-images/images/tree/main/images
[distroless]: https://www.chainguard.dev/unchained/minimal-container-images-towards-a-more-secure-future
[MTA]:        https://en.wikipedia.org/wiki/Message_transfer_agent


## 6. <a id="light"></a>Lightweight Alternatives to Docker

Those afflicted with sticker shock at seeing the size of a [Docker
Desktop][DD] installation — 1.65 GB here — might’ve immediately
“noped” out of the whole concept of containers. The first thing to
realize is that, when it comes to actually serving simple containers
like the ones shown above, [Docker Engine][DE] suffices, at about a
quarter of the size.

Yet on a small server — say, a $4/month ten gig Digital Ocean droplet —
that’s still a big chunk of your storage budget. It takes ~60:1 overhead
merely to run a Fossil server container? Once again, I wouldn’t
blame you if you noped right on out of here, but if you will be patient,
you will find that there are ways to run Fossil inside a container even
on entry-level cloud VPSes. These are well-suited to running Fossil; you
don’t have to resort to [raw Fossil service][srv] to succeed,
leaving the benefits of containerization to those with bigger budgets.

For the sake of simple examples in this section, we’ll assume you’re
integrating Fossil into a larger web site, such as with our [Debian +
nginx + TLS][DNT] plan. This is why all of the examples below create
the container with this option:

```
  --publish 127.0.0.1:9999:8080
```

The assumption is that there’s a reverse proxy running somewhere that
redirects public web hits to localhost port 9999, which in turn goes to
port 8080 inside the container.  This use of port
publishing effectively replaces the use of the
“`fossil server --localhost`” option.

For the nginx case, you need to add `--scgi` to these commands, and you
might also need to specify `--baseurl`.
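
A matching nginx `location` block might look something like this (the
URL prefix is illustrative; adjust to taste):

```
    location /code/ {
        include scgi_params;
        scgi_pass 127.0.0.1:9999;
        scgi_param SCRIPT_NAME "/code";
    }
```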

Containers are a fine addition to such a scheme as they isolate the
[DNT]: ./server/debian/nginx.md
[srv]: ./server/


### 6.1 <a id="nerdctl" name="containerd"></a>Stripping Docker Engine Down

The core of Docker Engine is its [`containerd`][ctrd] daemon and the
[`runc`][runc] container runtime. Add to this the out-of-core CLI program
[`nerdctl`][nerdctl] and you have enough of the engine to run Fossil
containers. The big things you’re missing are:

*   **BuildKit**: The container build engine, which doesn’t matter if
    you’re building elsewhere and shipping the images to the target.
    A good example is using a container registry as an
    intermediary between the build and deployment hosts.

*   **SwarmKit**: A powerful yet simple orchestrator for Docker that you
    probably aren’t using with Fossil anyway.

In exchange, you get a runtime that’s about half the size of Docker
Engine. The commands are essentially the same as above, but you say
“`nerdctl`” instead of “`docker`”. You might alias one to the other,
    [documented][srv].

*   The path in the host-side part of the `Bind` value must point at the
    directory containing the `repo.fossil` file referenced in said
    command so that `/museum/repo.fossil` refers to your repo out
    on the host for the reasons given [above](#bind-mount).
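
Assuming you express that in the machine’s `.nspawn` file, the mapping
might look like this (the host-side path is illustrative):

```
  [Files]
  Bind=/home/fossil/museum:/museum
```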

That being done, we also need a generic `systemd` unit file called
`/etc/systemd/system/fossil@.service`, containing:

----

```
[Unit]
Description=Fossil %i Repo Service
    --user admin             \
    museum/repo.fossil
```

You would also need to un-drop the `CAP_NET_BIND_SERVICE` capability
to allow Fossil to bind to this low-numbered port.
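
If you manage the container through its `.nspawn` file, one way to do
that is (a sketch, assuming that mechanism):

```
  [Exec]
  Capability=CAP_NET_BIND_SERVICE
```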

We use the `systemd` template file feature to allow multiple Fossil
servers to run on a single machine, each on a different TCP port,
as when proxying them out as subdirectories of a larger site.
To add another project, you must first clone the base “machine” layer:

```
  $ sudo machinectl clone myproject otherthing
```
assumptions baked into it.  We papered over these problems above,
but if you’re using these tools for other purposes on the machine
you’re serving Fossil from, you may need to know which assumptions
our container violates and the resulting consequences.

Some of it we discussed above already, but there’s one big class of
problems we haven’t covered yet. It stems from the fact that our stock
container starts a single static executable inside a bare-bones container
rather than “boot” an OS image. That causes a bunch of commands to fail:

*   **`machinectl poweroff`** will fail because the container
    isn’t running dbus.

*   **`machinectl start`** will try to find an `/sbin/init`
    program in the rootfs, which we haven’t got.  We could