
Compiling Python on any *nix OS is pretty simple. The compilation toolchain must be installed first before you can build open-source software such as the Python programming language.

On Ubuntu-like distros:
Install the build tools from the command prompt:

>>> sudo apt-get install build-essential

On CentOS-like OSes:
Install the development tools with:

>>> sudo yum groupinstall 'Development Tools'

Finally, head over to the Python website and download the latest release candidate.

Download either the Gzipped source tarball or the XZ compressed source tarball. If you downloaded the .gz source package, extract the code into a directory and compile like a boss:

>>> tar xzvf Python-3.9.0rc2.tar.gz
>>> cd Python-3.9.0rc2/

Missing _lzma? No problem. First install xz and lzma using MacPorts, then set the CFLAGS and LDFLAGS environment variables in bash so the compiler can find the lzma headers and libraries:

>>> sudo port install xz lzma
>>> export LDFLAGS="-L/opt/local/lib"
>>> export CFLAGS="-I/opt/local/include"
>>> ./configure --enable-optimizations
>>> make
>>> make test
>>> sudo make install
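The build steps above can be wrapped with a couple of sanity checks. A minimal sketch, assuming `nproc`/`sysctl` are available and a `python3` on the PATH (substitute the path of the freshly built interpreter):

```shell
# Pick a parallel job count for make (nproc on Linux, sysctl on macOS)
JOBS="$(nproc 2>/dev/null || sysctl -n hw.ncpu)"
echo "make -j${JOBS}"   # dry run: the parallel build invocation you would use
# Confirm the _lzma module actually compiled in
python3 -c 'import lzma; print("lzma OK")'
```

If the import fails, re-check the CFLAGS/LDFLAGS exports and re-run ./configure.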

One of my previous posts on installing Python used an XML file, which was by far the best approach for installing Python v3.8.x silently. After some experimenting, the XML method stopped functioning after I upgraded to Windows 10 v2004. After some thought, I decided to use the installer's other method to install the software onto Windows.

You must first download either the Python v3.8.5 64-bit installer or the Python v3.8.5 32-bit installer from python.org.

Afterwards, fire up your command prompt and run it as administrator instead of as a regular user, so the installer has permission to write to the C: drive.

Change directory to the downloads folder where the Python installer executable was saved, then run:

C:\users\UserName\Download> python-3.8.5-amd64.exe /quiet InstallAllUsers=1 TargetDir=c:\Python38 AssociateFiles=1 CompileAll=1 PrependPath=0 Shortcuts=0 Include_doc=1 Include_debug=0 Include_dev=0 Include_exe=1 Include_launcher=1 InstallLauncherAllUsers=1 Include_lib=1 Include_pip=1 Include_symbol=0 Include_tcltk=1 Include_test=1 Include_tools=1

Now, wait 10 to 15 seconds before entering the next command at the prompt. There is still one more task before we can call this a successful install: the pip package manager. The next block shows how to set up pip on your Windows 10.

If you plan to use pip immediately after installing Python with the method above, it will complain that your pip package manager is out of date, so we first must upgrade pip to the latest version.

Execute the command below inside your ADMIN command prompt window:

C:\users\UserName\Download> C:\Python38\python.exe -m pip install --upgrade pip

To install Python packages, do the following.

C:\users\UserName\Download> C:\python38\scripts\pip.exe install requests diceware scipy
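A quick sanity check that the upgrade took. Shown here with POSIX syntax for brevity; on Windows substitute `C:\Python38\python.exe` as above:

```shell
# Print the interpreter version and the active pip version
python3 -c 'import sys; print(sys.version.split()[0])'
python3 -m pip --version
```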

Done !

This was something that intrigued me for the past few months, but I couldn't find time to work on this Docker item until today. The concept of using Docker to build your own unique cluster farm of bots is quite cool: one can make as many worker bees in a single cluster farm as there is DISK and RAM space available. I will provide one example of how I was able to build a simple cluster using Docker and Docker-tools.

1) Docker
2) Docker-Toolbox

In order to build a simple Docker cluster of bots, we must first create a BOSS (manager) agent and then the other worker bees, in that exact order.

I am assuming you are on Linux/Mac, since Windows Docker worker agents are a whole different beast to deal with when building a Docker cluster bot farm, aka a Borg Collective.

1) docker-machine create --driver virtualbox boss1
2) docker-machine ip boss1
3) docker-machine ssh boss1

Now that we are inside the boss1 machine, we must initialize boss1 to become the BOSS of this Docker cluster.

1) docker swarm init --advertise-addr boss1_IP_addr
It returns a join command like this (your token and address will differ):

docker swarm join --token SWMTKN-1-5qualk27cjkwaplb1j7zn88me2zlxqe7owe1cn2c2s14wb2mpt-3liv5iz5sz9c8j6jtd3w5nql8 boss1_IP_addr:2377

2) docker node ls
it returns something like this:

88ruvvd93cwd7cp4jt5u1tkwg * boss1 Ready Active Leader 19.03.5

So now you can see that this machine has become the cluster leader in our swarm. Next, we add the other worker bees to our cluster (Borg Collective). Fire up another terminal tab in your OS and create additional worker machines (repeat the task below to create more borgs/bees by changing work1 to work2 or workN).

*in workN, N is a number.

1) docker-machine create --driver virtualbox work1
2) docker-machine ip work1
3) docker-machine ssh work1

Now we need to paste the join command printed by boss1 above to allow work1 to join the BOSS Collective.

4) docker swarm join --token SWMTKN-1-5qualk27cjkwaplb1j7zn88me2zlxqe7owe1cn2c2s14wb2mpt-3liv5iz5sz9c8j6jtd3w5nql8 boss1_IP_addr:2377

It's weird how well this works: keep opening new tabs and running steps 1, 2, and 3 to create additional worker borg agents, then run the join command on each of the worker bees (depending on how many you wish to make). These steps did work! After creating a dozen or so bots, you can install all kinds of open-source software onto them and allow the BOSS to control the worker bees/borgs, aka the swarm/collective.
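Opening a tab per worker gets tedious. A hypothetical loop can at least generate the `docker-machine create` commands for N workers (dry run with `echo`, so it is safe to try without VirtualBox):

```shell
# Generate the create commands for work1..workN (echo = dry run; remove echo to execute)
N=3
for i in $(seq 1 "$N"); do
  echo docker-machine create --driver virtualbox "work${i}"
done
```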

If you decide to pull your worker bees out of the swarm, execute the following on each of the workers, including the leader:

docker swarm leave

Once outside of the swarm cluster, you can stop the instances/machines and delete them using the following commands:

docker-machine stop work2
docker-machine rm work2

docker-machine stop work1
docker-machine rm work1

docker-machine stop boss1
docker-machine rm boss1

Happy Dockering !

In a previous post we covered using Homebrew on Mac. This time we will go through how to add, delete, and clean up software installed using MacPorts.


I briefly mentioned using Homebrew to add software packages to macOS (High Sierra and newer) in a previous posting. After some thought, it seemed useful to also jog my memory on how to use MacPorts once that package manager utility is installed on a Mac. I recommend using either one, or even both, package managers. The Fink utility is foreign to me; I have not used it yet, but I am certain it is as good as MacPorts.

Install MacPorts:
Download the installer package for your macOS version from their website.

Once installed, fire up the native Mac terminal, or install the Hyper or iTerm v3.x console. Which terminal you use is a matter of preference; pick one or two you are comfortable with, then execute the following commands once you have MacPorts installed.

# to update package listings
sudo port selfupdate

# to search for packages
sudo port search PACKAGEName

# to Install packages
sudo port install PACKAGEName

# to upgrade outdated packages
sudo port upgrade outdated

# to check current inactive packages
sudo port installed inactive

# to clean current inactive packages
sudo port uninstall inactive

# to check current active packages
sudo port installed active

# to delete package
sudo port uninstall PACKAGEName

# to list leaf ports (installed only as dependencies, no longer required)
sudo port echo leaves

# to clean out dependencies
sudo port uninstall leaves
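The maintenance commands above can be rolled into one hypothetical routine (dry run with `echo`; drop the `echo` to actually run it):

```shell
# Routine MacPorts upkeep: update the tree, upgrade, then prune inactive and leaf ports
for cmd in "selfupdate" "upgrade outdated" "uninstall inactive" "uninstall leaves"; do
  echo sudo port $cmd
done
```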

# Start / Stop
sudo port load mariadb-10.1-server
sudo port unload mariadb-10.1-server

# secure install
sudo /opt/local/lib/mariadb-10.1/bin/mysql_secure_installation

# login
/opt/local/lib/mariadb-10.1/bin/mysql -u username -p

# Change password (put the new password inside the quotes; empty quotes = blank password)
/opt/local/lib/mariadb-10.1/bin/mysqladmin -u root password ''

The simple answer is yes: one can script Docker like a boss. I kind of struggled this semester trying to make sense of what Docker is and what Docker can do for systems engineering as a whole. My struggle was understanding how a local installation of Docker connects to DockerHub. This was my one and only struggle! After realizing that all of the files (images), aka resources, come from the DockerHub website, I was in an absolutely euphoric state. An LED lightbulb woke me from a deep slumber and everything made sense.

The key is knowing how to write the commands that control the delivery of these software images onto your local machine (Mac, Windows, Linux). Once an image has been pulled from DockerHub, you have the ability to customize it, and to turn one image from DockerHub into a single instance or into thousands of local instances of that one particular image. These local runnable images can be copied with unique customizations and turned into containers. After customization, you can run the containers, alter their configs, add features, and finally delete them if necessary.

For example, the following lines of code can be saved into a file called Dockerfile. Once the code has been inserted into this special file, you can build it from your terminal and execute all the commands. Your supervisor will be impressed that you are busy! (The longer this file is, the more text will fly across your terminal like a screen saver.)

# This is a Dockerfile

FROM ubuntu
ENV PATH /usr/local/bin:$PATH
RUN set -eux; \
    apt-get update -y; \
    apt-get install -y wget vim apache2 mariadb-server
# run Apache in the foreground so the container keeps running
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

So now you ask yourself: what do you do with the code above? Right. In your OS, do a few things: create a directory, add that script, and save it as Dockerfile.

mkdir $HOME/dockerimg
cd $HOME/dockerimg
# copy the code from above and save it inside this $HOME/dockerimg location as "Dockerfile"
# to build this Dockerfile, run the following command (image names must be lowercase):
docker build -t nameofyourimage:latest .

Now you will see all kinds of text fly across your screen as if you are super busy doing something. You can also push the image to your DockerHub account, but one problem is that some of these images are pretty large. Unless you have paid for a DockerHub account, you are generally not allowed to upload images beyond a certain size.
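Before pushing, it is worth checking sizes with `docker images`. A hypothetical filter flags anything in the gigabyte range; sample data stands in for the real `docker images --format '{{.Repository}} {{.Size}}'` output so the sketch runs even without Docker:

```shell
# Flag images whose size is reported in GB (sample input mimics docker images output)
printf '%s\n' "myimage 1.2GB" "alpine 5.61MB" | awk '$2 ~ /GB/ {print $1 " is large: " $2}'
```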

Recently I learned of a newly developed, open-source programming language made available to the general public, which resembles a derivative of C/C++ with some resemblance to C#: a natural language of the robots called Beef. I cloned the GitHub repo onto Ubuntu 18.04 (both VM and bare metal) and onto a Mac running the latest version of Xcode; in all three cases, compiling the software failed miserably.

Failure on Linux VM, Ubuntu 18.04.x:
collect2: fatal error: ld terminated with signal 9 [Killed]
compilation terminated.
tools/lto/CMakeFiles/LTO.dir/build.make:278: recipe for target 'lib/' failed
make[2]: *** [lib/] Error 1
make[2]: *** Deleting file 'lib/'
CMakeFiles/Makefile2:20778: recipe for target 'tools/lto/CMakeFiles/LTO.dir/all' failed
make[1]: *** [tools/lto/CMakeFiles/LTO.dir/all] Error 2
Makefile:151: recipe for target 'all' failed
make: *** [all] Error 2

Failure on Mac Catalina with Xcode:
/Users/flo/github/Beef/BeefySysLib/platform/darwin/../posix/PosixCommon.cpp:1360:1: warning: control reaches end of non-void function
1 warning generated.
[100%] Linking CXX static library ../Release/bin/libBeefRT.a
/Applications/ file: ../Release/bin/libBeefRT.a(StompAlloc.cpp.o) has no symbols
/Applications/ file: ../Release/bin/libBeefRT.a(StompAlloc.cpp.o) has no symbols
[100%] Built target BeefRT
Building BeefBuild_bootd
TIMING: Beef compiling: 60.2s
Linking BeefBuild_bootd...Undefined symbols for architecture x86_64:
  "_ffi_call", referenced from:
      bf::System::FFI::FFILIB::Call(bf::System::FFI::FFILIB::FFICIF*, void*, void*, void**) in libBeefRT_d.a(Internal.cpp.o)
  "_ffi_closure_alloc", referenced from:
      bf::System::FFI::FFILIB::ClosureAlloc(long, void**) in libBeefRT_d.a(Internal.cpp.o)
  "_ffi_prep_cif", referenced from:
      bf::System::FFI::FFILIB::PrepCif(bf::System::FFI::FFILIB::FFICIF*, bf::System::FFI::FFIABI, int, bf::System::FFI::FFIType*, bf::System::FFI::FFIType**) in libBeefRT_d.a(Internal.cpp.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
FAIL: Exit code returned: 1

Failure on bare-metal Ubuntu 18.04.
This was the machine that got to the linking stage:
cannot find -ltinfo
build.make:194: recipe for target 'Debug/bin/BeefBoot' failed
build.make:253: recipe for target 'BeefBoot/CMakeFiles/BeefBoot.dir/all' failed

I compiled it four times, and each time got different failure messages on Mac and Linux. For what it's worth: "ld terminated with signal 9 [Killed]" usually means the linker ran out of memory, the undefined _ffi_* symbols on the Mac suggest libffi was not found at link time, and "cannot find -ltinfo" on Ubuntu usually means the libtinfo/ncurses development package is missing. If anyone has a solution to either compilation failure, let me know; reach me at aschenbach at gmail com.

A quick note about how to pull a container image from Docker Hub, add additional software to it, and upload the result back to Docker Hub. The distro we are using in this example is Alpine Linux; I am not as familiar with this flavor of Linux. Alpine is different from other Linuxes and uses its own package manager (apk) to install additional software. On Docker Hub it is super lightweight and ships with an old-school Unix shell. In this example, we will pull the image, add bash, commit and package it, and finally upload the new version to your free Docker Hub account.

A pure example, for my personal recollection/fulfillment:
flo@box: ~ $ docker pull alpine
# docker run -t -d --name {NameOfYourDockerInstance} alpine
flo@box: ~ $ docker run -t -d --name alpine_bash alpine
flo@box: ~ $ docker images
# docker exec -it {NameOfYourDockerInstance} sh
flo@box: ~ $ docker exec -it alpine_bash sh

Now we are inside this instance of Alpine as a virtualized container. We must add the additional software before exiting it. Since our example is installing our favorite shell, bash, we will do so inside this container.

The prompt for Alpine looks like this:
/ #
/ # apk update
/ # apk upgrade
/ # apk add bash
/ # exit

DONE with Alpine.
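The interactive steps above can also be scripted without entering the container by hand. A hypothetical dry run (the `echo` shows the command that would be sent; remove it to execute for real, with Docker installed and the alpine_bash container running):

```shell
# One-shot, non-interactive setup command for the container (dry run via echo)
SETUP='apk update && apk upgrade && apk add bash'
echo docker exec alpine_bash sh -c "$SETUP"
```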

Now log into your Docker Hub account and create a new public repo to hold this container from your local machine. Once done, the following instructions apply to push this local instance to Docker Hub.

The Docker documentation gives the syntax as:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

On my machine, I've named the local container alpine_bash.

flo@box: ~ $ docker container ps -a # will show the following
352b8a99458f alpine "/bin/sh" 8 seconds ago Up 7 seconds alpine_bash

Use the container ID as the first parameter, then the public Docker Hub repository you created to hold this instance, then a version tag after the colon.

flo@box: ~ $ docker commit 352b8a99458f freemanbach/nixcontainer:0.0.1

Before pushing your local Docker image to the cloud, authenticate with the Docker Hub service:

flo@box: ~ $ docker login

flo@box: ~ $ docker push freemanbach/nixcontainer:0.0.1


Just some notes to remember how this works, what it is associated with when using a containerized application platform such as Docker, and how it can benefit systems engineering and administration.

For the sake of simplicity, let us use a Linux host machine called box, with username flo. {NameOfYourDockerInstance} is the name you will give the Docker instance on your local machine.

Pulling an OS image from Docker Hub
flo@box: ~ $ docker pull centos # other OS listed on DockerHub

Name your OS instance
flo@box: ~ $ docker run -t -d --name {NameOfYourDockerInstance} centos

Show what is running inside docker
flo@box: ~ $ docker ps

Execute a shell inside this Docker instance
flo@box: ~ $ docker exec -it {NameOfYourDockerInstance} bash

Check the status of your docker instances
flo@box: ~ $ docker stats

To Stop/Start Docker Instance
flo@box: ~ $ docker start/stop {NameOfYourDockerInstance}

If by chance you forgot to publish a port on a Docker container, one option is to find the container's hostconfig.json file on the host OS and edit the configuration of the container you wish to change. A second approach is to stop the instance, commit a copy of the source instance to a new image, then use the run command to finally add the port:

flo@box: ~ $ docker stop cake
flo@box: ~ $ docker commit cake cake2
flo@box: ~ $ docker run -t -d -p 8080:8080 cake2
flo@box: ~ $ docker exec -it cake2 bash
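The stop/commit/run dance generalizes into a small hypothetical helper. The function name and the `echo` dry run are illustrative (drop the echoes to execute for real, with Docker installed):

```shell
# Recreate a container under a new name with a published port (dry run via echo)
republish_with_port() {
  src="$1"; dst="$2"; port="$3"
  echo docker stop "$src"
  echo docker commit "$src" "$dst"
  echo docker run -t -d -p "${port}:${port}" "$dst"
}
republish_with_port cake cake2 8080
```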