I’m frequently in the situation of having to create a handful of slides quickly, on the go. The slides are typically used in a short meeting where some supporting material helps. Since in the end I can use my laptop for presenting, converting any kind of slide source into actual slides right before the meeting is an option for me: I can e.g. write the slides as text on my phone, then convert them to slides on my laptop. The only requirement I personally have is that I can write the slide source with pretty much anything, so plain text in any editor or note app on a phone would be perfect. This is why I’ve tried creating slides from markdown (which can be written easily in any editor, including on phones), e.g. with MarkdownPresenter in my previous post. In my opinion, there are a few even simpler and better solutions to this problem. One of them is using pandoc to convert slide source, such as markdown, into slides. In my opinion, using pandoc has three major advantages:
1. Pandoc is a well-known and widely available tool; no exotic software is required for creating slides from slide source. Pandoc is an all-in-one solution for this problem, and it can be installed straight from the repositories of whichever distribution you use (it’s even available for Mac and Windows). So pandoc is a stable solution to this problem.
2. Pandoc is extremely powerful at the one job it does: converting between document formats. With pandoc, the slide source can be e.g. markdown, formats very similar to classic markdown, or several other formats (check the pandoc demos page!). And pandoc supports different output formats for slides: pdf, html, etc. This means you can create different versions of the same presentation, e.g. pdf slides for presenting plus html slides to put online, without any overhead.
3. Pandoc conversion is one command only, hence once you’ve written your slide source, the overhead of getting actual slides is practically zero. It’s as easy as it sounds: write markdown slides, run one pandoc command, done.
Here’s some simple markdown source to create slides from:
# Slide 1 title
Some super quickly created demo slides
* Do not need anything else than markdown
* Slide titles start with # (which also starts a new slide)
* Bullet points, newlines, empty lines: all standard markdown
* However, can also use other stuff, e.g.:
  * Some HTML (e.g. \<center\>)
  * When using pandoc beamer, can use latex commands (e.g. \\center, \\large, etc.) ...
# Slide 2 title
\center The slide syntax is so simple that you can quickly create a handful of slides on basically any device in any editor. E.g. on your mobile on the way to the meeting where you need the slides. Right before the meeting starts you use pandoc to create the actual slides from your source.
Using this source, pandoc can create different kinds of slides, including Latex Beamer (pdf) and HTML. Two extremely simple and minimalistic examples – out of the many available – are:
1. create minimalistic pdf slides:
pandoc -t beamer slides.md -o slides.pdf
2. create a minimalistic continuous-text representation as html. This does not split the content into slides, and hence allows for effectively searching content details on e.g. an intranet website:
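The corresponding command might look like this (a sketch: the source file name slides.md matches the beamer example above, and -s produces a standalone html file):

```shell
pandoc -s slides.md -o slides.html
```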
pandoc supports many more features for creating slides than only those two very basic examples. Be sure to check out the pandoc demos page to see which source you can use with which command to create which kind of slides!
Marp is a very simplistic markdown presentation editor and viewer that uses markdown to create pdf slides. Especially its simplicity makes it a good choice for quickly creating some slides – a job for which other tools would easily be too bloated. To use Marp you just need to:
* Download from https://github.com/jsakamoto/MarkdownPresenter
* Start Marp by executing the `Marp` executable file.
# Marp Usage
Marp uses regular markdown, with one exception:
* `---`, `***`, or `___` indicates the end of a slide / the start of the next slide, instead of introducing a horizontal line.
Therefore, to make slides, you will use:
* `#`, `##`, etc. to create headlines
* `*`, `1.`, etc. to create bullet points and enumerations
* `[text](url)` to create links
* `![alt](path)` to include figures
# Marp Usage
Even tables work as expected:
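For illustration, a small (hypothetical) markdown table that Marp renders as a regular table:

```markdown
| Tool   | Input    | Output |
|--------|----------|--------|
| Marp   | markdown | pdf    |
| pandoc | markdown | pdf    |
```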
In two previous posts (post one, post two) we have shown how to use git latexdiff and latexdiff to visually highlight the differences between different versions of a Latex file in the form of a Latex-compiled diff file. For comparing arbitrary text files instead of Latex files, no such Latex-based visual comparison is available. However, git itself ships a nice little tool to compare two versions of the same text file and highlight the differences: git difftool. It basically does two things for you:
* Check out the versions of the file to compare
* Fire up your diff program to show/highlight the differences
All you need to do is specify the file and the versions you want to compare.
git difftool is part of the git installation. However, it needs to know which diff program you want to use when calling git difftool. You can specify this by adding the following snippet to your ~/.gitconfig:
[diff]
    tool = MY-DIFF-TOOL
where MY-DIFF-TOOL could be vimdiff, kdiff3, or similar (ensure the tool you specify is installed and available in the PATH on your machine). Now git difftool should be ready for usage:
git difftool has a simple call syntax (in fact the same as git diff). The command to compare files will look similar to this:
git difftool COMMIT1 COMMIT2 FILE
If you use HEAD as COMMIT1 and leave out COMMIT2, you will see the differences between the file in its current, possibly unstaged/uncommitted state and its last commit. If you instead specify a hash for COMMIT1 (still leaving out COMMIT2), you will see the differences of the current file to that specified commit. If you specify hashes for both COMMIT1 and COMMIT2, you will see the differences between those two versions of the file.
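The cases above can be sketched as follows (report.txt and the commit hashes are hypothetical placeholders):

```shell
# Working tree (possibly uncommitted changes) vs. last commit:
git difftool HEAD report.txt

# Working tree vs. a specific commit:
git difftool a1b2c3d report.txt

# Two committed versions of the file against each other:
git difftool a1b2c3d e4f5a6b report.txt
```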
On Ubuntu, if you suddenly cannot resolve DNS addresses anymore, though your network connection is up, you might just have run into a problem with dnsmasq (a local DNS server) that is used by NetworkManager. This post is about disabling dnsmasq and using the DNS servers advertised by your network directly instead.
Is it a DNS problem?
A good indication that you are facing a DNS problem on your machine is when you are connected to a network (meaning you still have an IP address assigned), but your internet connection suddenly stops working, and you are unable to ping DNS addresses like www.google.com:
ping www.google.com # unknown host
but you are still able to ping IP addresses like 8.8.8.8:
ping 8.8.8.8 # works fine
This looks like you are not able to resolve DNS addresses any more – but you are still correctly connected to your network and to the internet.
What could cause this problem on Ubuntu?
Ubuntu uses NetworkManager, which in turn uses dnsmasq: a local DNS server running on your machine. As dnsmasq runs locally, redirecting DNS requests to the local address 127.0.0.1 is fine in such a setup. You can see this being used by looking at your /etc/resolv.conf file: if it shows the following line, a local DNS server is in use:
However, with certain setups you might run into problems with this way of resolving DNS addresses. This might be the case e.g. when you use a different program in parallel to manage your connections which does not cooperate well with dnsmasq.
What could be the solution?
An easy solution is to test whether your machine works fine without dnsmasq. In this case DNS will be resolved not by a local dnsmasq DNS server but by the DNS servers advertised via DHCP. To disable dnsmasq with NetworkManager, comment out the dns=dnsmasq line in /etc/NetworkManager/NetworkManager.conf:
This prevents NetworkManager from starting a dnsmasq instance, and hence prevents it from resolving DNS lookups locally. DNS lookups will then be done using the DNS servers advertised to your machine via DHCP. You can check that this actually worked as follows: after a reboot (which clears all possibly cached DNS server info on your machine) and connecting to a network, you should see actual DNS servers showing up in /etc/resolv.conf (with actual IP addresses of course) in place of the former 127.0.0.1 entry:
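For example, with a (hypothetical) router at 192.168.1.1 acting as the advertised DNS server, /etc/resolv.conf would then contain something like:

```
nameserver 192.168.1.1
```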
We frequently need to mail pdf files that are too big for regular mail services, such as a 40MB pdf file with a maximum 10MB send restriction. In such situations, quick and effective pdf compression comes in handy – compression that does not reduce the quality to a level where the file becomes unusable.
Under Linux and Unix-like systems, GhostScript is one of the most powerful tools (probably the most powerful one) for manipulating files like pdf, ps, etc. If you are on a Linux/Unix-like system and need a job done with pdf files, check out the “How to use GhostScript” site. It’s not unlikely that GhostScript already has a built-in solution to your problem. Consequently, it also features a way of effectively compressing pdf files with different options and settings.
GhostScript pdf compression
Effective pdf compression is possible with GhostScript using a single command (adapted from here and here):
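Such a command could look like this (input.pdf and output.pdf are placeholder file names; the switches are explained below):

```shell
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dBATCH -dQUIET -sOutputFile=output.pdf input.pdf
```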
-dPDFSETTINGS= specifies the quality level of the pdf file. This affects embedded pixel graphics (and also adapts embedded color profiles) and is the main option for controlling the compression level, and thereby the resulting file size:
-dPDFSETTINGS=/screen (72 dpi images)
-dPDFSETTINGS=/ebook (150 dpi images)
-dPDFSETTINGS=/printer (300 dpi images)
-dPDFSETTINGS=/prepress (300 dpi images, color preserving)
Other switches: the output is written as pdf (-sDEVICE=pdfwrite), the pdf compatibility level is set to 1.4 (-dCompatibilityLevel=1.4), the process does not require user interaction (-dNOPAUSE and -dBATCH), and GhostScript skips the startup messages (-dQUIET).
Sometimes there is a need to install a specific version of a piece of software: for it to be compatible with other software, because it already has a certain bug fixed, or because we need specific features. And sometimes multiple versions of that software are available in the repos of your Linux machine, which allows selecting the version you want instead of just installing the default one. With apt this is possible – however, be careful not to install incompatible versions of software that cause conflicts with other things on your machine.
We are now going to demonstrate installing a specific version of fish, the user-friendly interactive shell. At the time of writing only version 2.2.0 is available in the Ubuntu 16.04.3 repositories, but we need at least version 2.3.x, as this is where fish became compatible with fzf, the fuzzy command-line finder (which is a very helpful tool, btw). At the time of writing, the latest version in the fish 2.x PPA is 2.7.x. We therefore first need to add this PPA to make the version we need available to apt:
sudo apt-add-repository ppa:fish-shell/release-2
sudo apt-get update
sudo apt-get install fish
Now, to see all versions of the desired package available for installation on your machine with apt execute:
apt-cache madison fish
In our example we see:
fish | 2.7.0-1~xenial | http://ppa.launchpad.net/fish-shell/release-2/ubuntu xenial/main amd64 Packages
fish | 2.2.0-3 | http://fi.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
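To install a specific one of the listed versions, append it to the package name with =, using the version string from the madison output above (here the PPA version):

```shell
sudo apt-get install fish=2.7.0-1~xenial
```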
For the night owls amongst us: looking at our screens during long night sessions – and thereby at the whole spectrum of visible light from blue to red – means accepting a bunch of unhealthy drawbacks. For example, as the blue part of the spectrum of visible light is usually only present during daytime, our bodies have adapted to using it as a “clock” mechanism: while we see blue light, we release hormones that keep us awake. This can make falling asleep more difficult or even cause insomnia when we have worked in front of a monitor until late in the night. Therefore, besides dimming monitors (which reduces the contrast to the dark surroundings), reducing the amount of blue light during night sessions – giving our monitors a tinge of red – is a good thing, not only for our eyes but for our bodies as a whole.
How to cause a redshift on your screen
Luckily for the night owls amongst us, this problem has been recognized and addressed on different platforms already: even many mobile phones provide functionality to redshift screen colors nowadays. For Linux, applications that do this job for us include redshift, openlux, and f.lux (the latter seems to be the original but is closed source). My current personal recommendation is redshift, as it’s open source, included in the repositories of all major distributions, easily configurable, and does the job with a single command on your terminal. One word for the curious: technically, redshift relies on an X server extension to function.
In terms of features, redshift automatically adjusts the screen colors to better match what the natural light would be – light from a sun that has probably set hours ago, meaning there should be no blue light at all. redshift uses your location and the time of day to adjust the screen colors. It can derive the location automatically, but I personally like providing it by hand (see the example below). Next to the location, the second important setting is what you want your screen colors to be during daytime and nighttime (the color temperature in Kelvin, to be exact). Both settings can be provided as parameters when calling redshift from the terminal:
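For example (a sketch with assumed values: -l takes LAT:LON, here roughly Berlin, and -t takes the daytime and nighttime color temperatures in Kelvin):

```shell
redshift -l 52.5:13.4 -t 5700:3500
```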