bdcs: Tools for managing a content store of software packages


This module provides a library and various tools for managing a content store and metadata database. These store the contents of the software packages that make up a Linux distribution, as well as metadata about those packages. Tools are included to construct those stores from pre-built software and to pull files back out to turn into bootable images.



Properties

Versions: 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.6.1
Change log: ChangeLog.md
Dependencies: aeson (>=1.0.0.0 && <1.4.0.0), aeson-pretty, base (>=4.9 && <5.0), bdcs, bytestring (==0.10.*), codec-rpm (>=0.2.1 && <0.3), cond (>=0.4.1.1 && <0.5.0.0), conduit (>=1.2.8 && <1.3), conduit-combinators (>=1.1.0 && <1.2), conduit-extra (>=1.1.0 && <1.3), containers (>=0.5.7.1 && <0.6), content-store (>=0.2.1 && <0.3.0), cpio-conduit (>=0.7.0 && <0.8.0), cryptonite (>=0.22 && <0.30), directory (>=1.3.0.0 && <1.4.0.0), esqueleto (>=2.5.3 && <2.6.0), exceptions (>=0.8.0 && <0.11.0), filepath (>=1.4.1.1 && <1.5.0.0), gi-gio (>=2.0.14 && <2.1.0), gi-glib (>=2.0.14 && <2.1.0), gi-ostree (>=1.0.3 && <1.1.0), gitrev (>=1.3.1 && <1.4.0), http-conduit (>=2.2.3 && <2.3.0), listsafe (>=0.1.0.1 && <0.2.0), memory (>=0.14.3 && <0.15.0), monad-control (>=1.0.1.0 && <1.1.0.0), monad-logger (>=0.3.20.2 && <0.3.28.2), monad-loops (>=0.4.0 && <0.5), mtl (>=2.2.1 && <2.3), network-uri (>=2.6.0 && <2.7.0), parsec (>=3.1.10 && <3.2.0), parsec-numbers (>=0.1.0 && <0.2.0), persistent (>=2.7.0 && <2.8.0), persistent-sqlite (>=2.6.0 && <2.7.0), persistent-template (>=2.5.0 && <2.6.0), process (>=1.4.3.0 && <2.0), regex-pcre (==0.94.*), resourcet (>=1.1.9 && <1.2), split (>=0.2.3 && <0.3), tar (==0.5.*), tar-conduit (>=0.1.0 && <0.2.0), temporary (>=1.2.0.4 && <1.3.0.0), text (>=1.2.2.0 && <1.3), time (>=1.6.0.1 && <2.0), unix (>=2.7.2.1 && <2.8.0.0), unordered-containers (>=0.2.7.2 && <0.2.10.0), xml-conduit (>=1.4.0.4 && <1.8.0)
License: LGPL-2.1-only
Author: Chris Lumens
Maintainer: clumens@redhat.com
Category: Distribution
Home page: https://github.com/weldr/bdcs
Source repository: head: git clone https://github.com/weldr/bdcs
Executables: bdcs-depsolve, bdcs-tmpfiles, bdcs-export, inspect-nevras, inspect-ls, inspect-groups, bdcs-inspect, bdcs-import, bdcs
Uploaded: Fri Apr 13 14:01:03 UTC 2018 by clumens

Flags

Name: scripts
Description: Enable importing package scripts to the database
Default: Disabled
Type: Automatic

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.



Readme for bdcs-0.4.0



This code generates a metadata database (mddb) from an input directory of RPMs. You can generate it either by running locally or by running under docker. It is best to store the RPMs locally as well; reading them from an NFS mount or other network storage can slow the import down considerably.

Importing the same set of RPMs into the same database twice should result in no changes. Importing additional RPMs into the same database should result in those RPMs being added to the existing database. There is currently no provision for removing an imported RPM. In this way, you could import a very large set of packages piecemeal if needed.
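The "import twice, no changes" behavior can be pictured with a small, self-contained sqlite3 sketch. Note that the `pkgs` table and `nevra` column below are toy stand-ins for illustration only; the real tables are defined in schema.sql:

```shell
# Toy stand-in for the mddb; the real schema lives in schema.sql.
db=$(mktemp)
sqlite3 "$db" 'CREATE TABLE pkgs (nevra TEXT PRIMARY KEY);'
# A second "import" of the same package changes nothing.
sqlite3 "$db" "INSERT OR IGNORE INTO pkgs VALUES ('bash-4.4.19-1.x86_64');"
sqlite3 "$db" "INSERT OR IGNORE INTO pkgs VALUES ('bash-4.4.19-1.x86_64');"
sqlite3 "$db" 'SELECT COUNT(*) FROM pkgs;'   # prints 1
rm "$db"
```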

Running locally

You will first need a directory full of RPMs somewhere. Here, I assume that is the $PWD/Packages directory. Then run:

$ cabal sandbox init
$ cabal install --dependencies-only --enable-tests
$ cabal build
$ sqlite3 metadata.db < schema.sql
$ for f in ${PWD}/Packages/*rpm; do dist/build/bdcs-import/bdcs-import metadata.db cs.repo file://${f}; done
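After the loop finishes, you can sanity-check the result with sqlite3. This sketch assumes the import above completed and that schema.sql defines a projects table; verify the actual table names against schema.sql before relying on them:

```shell
# Assumes metadata.db was produced by the import loop above; check
# schema.sql for the actual table names before querying.
sqlite3 metadata.db 'SELECT COUNT(*) FROM projects;'
```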

Running with docker

Running with docker is a two step process, as indicated by Dockerfile.build and Dockerfile. Dockerfile.build is used to compile the program needed to build an mddb and produces an image with that program. Dockerfile then runs that image and produces the mddb.

The Dockerfile depends on a base image, named welder/fedora:latest, which needs to have been built beforehand. If it is not available, it can be built from the welder-deployment repository by running make weld-fedora.

The Makefile lays out the exact steps and can be used to simplify all this: just run make importer mddb. If make is unavailable, copy the steps out of the Makefile and run them manually.

The Makefile expects that the RPMs are in $PWD/rpms.

After completion, the mddb and content store will be in a bdcs-mddb-volume docker volume.

Preparing local development environment for Haskell

For development we use the latest upstream versions:

  1. Remove the standard haskell-platform and ghc-* RPMs if you have them installed
  2. Download version 8.0.2 of the generic Haskell Platform distribution from https://www.haskell.org/platform/linux.html#linux-generic
$ tar -xzvf haskell-platform-8.0.2-unknown-posix--minimal-x86_64.tar.gz
$ sudo ./install-haskell-platform.sh
  3. Add /usr/local/bin to your PATH if not already there!
  4. Install build dependencies:
# dnf -y install xz-devel zlib-devel glib2-devel gobject-introspection-devel ostree-devel

NOTE: On RHEL 7 ostree-devel is part of the Atomic Host product!

Building the project locally

cabal is used to install and manage Haskell dependencies from upstream.

$ cd src/ && cabal sandbox init && cabal install

Executing unit tests

$ cabal sandbox init
$ cabal install --dependencies-only --enable-tests
$ cabal test
Running 1 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: dist/test/db-0.1.0.0-test-db.log
1 of 1 test suites (1 of 1 test cases) passed.

Produce code coverage report

$ cabal sandbox init
$ cabal install --enable-tests --enable-coverage
$ cabal test
$ firefox ./dist/hpc/vanilla/tix/*/hpc_index.html