The graph-tool build consumes too much memory

Hi. I have a device with an Apple M1 chip and I'm trying to build a graph-tool .deb package in a Debian Docker container on the arm64 architecture. I am using the debian:buster-slim image as the base of the Dockerfile. I understand that the dpkg-buildpackage command is what compiles the .cpp files. This process consumes a great deal of memory (about 10 GB). My device doesn't have that much memory, so the build crashes. Can we expect a ready-made package for the arm64 architecture in the near future, or can you suggest steps to optimize the build?

Change the variable NJOBS=4 to NJOBS=1. This will reduce the memory usage
(but it will increase the build time).
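For example, assuming the Dockerfile declares NJOBS as a build argument (ARG NJOBS) and forwards it to dpkg-buildpackage, the job count can be set from the docker build command line (image tag here is illustrative):

```shell
# Sketch: build with a single compile job to cap peak memory usage.
# Assumes the Dockerfile contains `ARG NJOBS` and uses it as -j$NJOBS.
docker build --build-arg NJOBS=1 -t graph-tool-deb .
```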

Yes, I already use NJOBS=1.
Below is the Dockerfile I am using:

ARG BASE=debian:buster-slim

FROM $BASE as builder


ARG DEBIAN_FRONTEND=noninteractive
ARG NJOBS=1
ARG DEB_VERSION=1
ENV TZ=Europe/Berlin

RUN apt-get update
RUN apt-get -y dist-upgrade
RUN apt-get -y install git dpkg-dev dh-make autotools-dev autoconf python3-dev python3-scipy libboost-dev libboost-graph-dev libboost-iostreams-dev libboost-python-dev libboost-context-dev libboost-coroutine-dev libboost-regex-dev libcgal-dev python3-cairo-dev libsparsehash-dev libcairomm-1.0-dev libffi-dev libexpat1-dev cdbs devscripts
RUN git clone python3-graph-tool
WORKDIR python3-graph-tool

RUN git checkout release-2.37
ADD debian debian
RUN /bin/bash > debian/changelog


RUN if [ "$DEB_VERSION" -gt "1" ]; then gt_version=`git describe --tags | grep -o 'release-[^-]*' | sed s/release-//`; dch -v ${gt_version}-${DEB_VERSION} -m 'New package release'; dch -r --no-force-save-on-release -m 'New package release'; fi
RUN head debian/changelog

RUN ./


RUN NJOBS=$NJOBS dpkg-buildpackage -us -uc -j$NJOBS
RUN mkdir build
# dpkg-buildpackage writes the .deb files to the parent directory
RUN mv ../python3-graph-tool*.deb build/


FROM $BASE

ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Berlin

RUN apt-get update
RUN apt-get -y dist-upgrade
RUN apt-get -y install gdebi-core
RUN apt-get -y install python3-matplotlib gir1.2-gtk-3.0 python3-cairo

RUN mkdir build
# COPY --from paths are resolved from the source stage's filesystem root
COPY --from=builder /python3-graph-tool/build/* build/
RUN gdebi -n build/python3-graph-tool_*.deb
RUN python3 -c "from graph_tool.all import *; show_config(); g = random_graph(10, lambda: 5, directed=False); graph_draw(g, output='foo.png')"

I never needed 10GB to compile it with only one job... But in any case,
I'm afraid there is not much that can be done.

In my experience memory usage peaked at around 7GB for a single process,
though this was an x86_64 build.

Some techniques to reduce compiler memory consumption have already been
applied in recent releases, and I suspect the low-hanging fruit is gone.
Given that peak memory consumption is the problem, it may be worth
identifying which specific translation units (object files) are the largest
and seeing whether they can be broken up. Typically there are 2-4 big
templated functions in each.
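As a rough sketch of that diagnosis step, listing the compiled object files by size immediately shows which translation units dominate. The file names below are made-up stand-ins, not actual graph-tool objects:

```shell
# Create a dummy build tree standing in for a real one
# (file names are illustrative only)
mkdir -p demo_build/src
head -c 5242880 /dev/zero > demo_build/src/big_templated_unit.o   # 5 MiB
head -c 4096    /dev/zero > demo_build/src/small_unit.o           # 4 KiB

# List object files by size, largest first; the top entries are the
# translation units worth trying to split up
find demo_build -name '*.o' -exec du -k {} + | sort -rn | head
```

On a real tree, run the `find | sort` pipeline from the build directory after a successful (or partial) compile.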

