Inflation rates of cryptocurrencies

Today I researched the money supply rates of some cryptocurrencies, namely Bitcoin, Litecoin, Dogecoin, Dash and Monero.
They use very different money supply algorithms.

Bitcoin / Litecoin
The block reward is constant, but it halves every 210000 blocks (840000 for Litecoin).

Dogecoin
Simply 10000 DOGE for every block.

Dash
The block reward depends on the current mining difficulty: 0.9 * max(5, min(25, 2222222 / ((difficulty + 2600) / 9)^2)). Every 210240th block the reward is reduced by 1/14.

Monero
The block reward depends on the amount of coins already in existence: max(0.6, (2^64 - 1 - coins * 10^12) / 2^19 / 10^12).

With these facts we can forecast the inflation rates of each currency.

Estimated annual inflation rates of cryptocurrencies:

Date        Bitcoin  Litecoin  Dogecoin  Dash   Monero
2016-03-27  8.6%     11.7%     5.1%      12.2%  30.8%
2017-03-27  4.1%     10.4%     4.8%      10.1%  15%
2018-03-27  3.9%     9.5%      4.6%      8.5%   8.1%
2019-03-27  3.7%     8.6%      4.4%      7.9%   4.6%
2020-03-27  3.6%     4.1%      4.2%      6.8%   2.7%
2022-03-27  1.7%     3.8%      3.9%      5.2%   0.92%
2025-03-27  0.83%    1.7%      3.5%      3.7%   0.85%
2030-03-27  0.4%     0.82%     3%        2.2%   0.81%

OpenCL on Kaveri

Kaveri APUs from AMD are the first APUs with hUMA support. This is a big step for OpenCL development: the GPU can now read and write global RAM directly, so copying huge amounts of memory from RAM to GPU memory and back is no longer necessary. I want to give a short overview of the characteristics of OpenCL programming with Kaveri and of its performance.


By default your kernel is compiled with a 32-bit address width. You should set the environment variable GPU_FORCE_64BIT_PTR to 1 to access the complete RAM. The GPU device of my Kaveri (A10-7850K) has the following specifications:

Device Name: Spectre (AMD Accelerated Parallel Processing, OpenCL 1.2 AMD-APP (1445.5))
Address Bits: 64
Little Endian: true
Global Memory Size: 512 MB
Base Address Alignment Bits: 2048
Global Memory Cache Size: 16 KB
Local Memory Size: 32 KB
Clock Frequency: 720 MHz
Compute Units: 8
Constant Buffer Size: 64 KB
Max Workgroup Size: 256

Since the mentioned GPU has 512 processing units spread over 8 compute units, we get the wavefront size of 64 that is typical for AMD. The reported global memory size is a bit confusing: it suggests that we can only access 512 MB of global memory, which is not true.
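Enabling the full address space is just an environment export before launching the host program (a minimal sketch; the variable is the one mentioned above):

```shell
# Tell the AMD OpenCL runtime to compile kernels with 64-bit
# pointers, so buffers can address the complete RAM:
export GPU_FORCE_64BIT_PTR=1
```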


C++ Web Toolkit Wt (witty)

During the last days I made a simple web application which serves chess puzzles for you to solve. You can see it here
Before starting this little project I decided to use Wt as a framework, to learn something new. It also seemed very easy to use and has similarities to Qt, which I am already familiar with.
I want to share my impressions after this first little project with it. I will start with my negative impressions:

  • The ownership of objects does not feel clear at first. Although there are rules for ownership, there are special cases which may not be intuitive in the beginning.
  • Even for my simple application there were some Wt related bugs that cost me some time.
  • Because Wt is a server-side framework, your UI can feel a bit slow. Wt solves this for some standard use cases; for example, a menu widget can preload all its child widgets and switch its content client-side. If you want something more special, e.g. changing the border color of a widget when it is clicked, you either accept the latency or you have to write JavaScript code for it. The good thing is that Wt offers good ways to integrate JavaScript.
  • For the layout you can choose between two predefined styles and a “bootstrap” theme (in versions 1, 2 and 3). In my opinion there could be more predefined themes, although I did not really miss them.

Of course there are also many good things to mention:

  • Wt comes with many basic examples you can learn from.
  • It is very actively developed. All bugs I reported got fixed within one week.
  • All communication between client and server is handled by Wt. This makes web development feel like developing for a desktop.
  • You don't have to care which browser the client is using. Wt handles this.

Whether I would use Wt again for my next project depends on the complexity of the UI and of the communication with the backend. If the UI is extremely complex I would probably choose another framework like GWT, because making the UI fast requires client-side code. If the UI is not too complex and the focus lies on the backend, I would definitely use Wt again.

Chess Tactics Server

Recently I bought a book with about 1000 chess puzzles, but many of the puzzles were simply incorrect. It is frustrating to spend a lot of time searching for a solution only to realize later that there is none. So I got the idea to write a program that generates correct puzzles from a game database. But what makes a good chess puzzle? In my opinion, a position is a good chess puzzle if

  • the best move is clearly the best, that means
    • the best move is winning and the second best move is not winning
    • or the best move is not losing and the second best move is losing
  • the best move is not obvious, that means
    • the best move was not played in that game
    • and the best move is not found by a chess engine at very low depth

To decide which move is winning or losing, Stockfish is used.
Surprisingly, with these simple rules I could find only about one puzzle per 200 games. Nevertheless my program has already generated many thousands of puzzles.
I decided to create a simple web app on top of this puzzle database. The link is

Stockfish Dev Builds FAQ

How often is that site updated?

The site is updated automatically. Every five minutes both repositories are pulled. If there is a new commit on the master branch, the binaries are built. This takes a few minutes.

What are the differences between all those versions?

  • Windows 32 runs on very old 32-bit versions of Windows, but it is relatively slow.
  • Windows x64 runs only on 64-bit versions of Windows. It is clearly faster.
  • Windows x64 for modern computers additionally requires a CPU that supports the popcnt instruction. Most modern CPUs do. The popcnt instruction speeds up some calculations.
  • Windows x64 for Haswell CPUs additionally uses the bmi2 instruction set, which speeds up some calculations further. So far only Intel's Haswell CPUs support this instruction set, but future CPUs probably will as well.
  • Linux … is obvious.

Which version is best for me?

The list below is in top-down order. Use the first version that works for you. If you are not sure, start the program and type “bench”. If it does not crash, it works on your computer.

Is there a constant link to the latest version?

Windows x64 for Haswell CPUs
Windows x64 for modern computers
Windows x64
Windows 32
Linux x64 for Haswell CPUs
Linux x64 for modern computers
Linux x64

Help needed: Stockfish for Haswell

It seems that the special SSE4.2 compile is not faster than the compile for modern computers. I decided to replace the SSE4.2 version with a special version for Haswell, which I expect to be measurably faster. Since I don't have access to a Haswell computer, I need your help to verify my assumptions.
If you have a Haswell computer, start each of the following executables and type “bench”. Then post the results for each version as a reply.

Stockfish Windows x64 for Haswell + profiling
Stockfish Windows x64 for Haswell
Stockfish Windows x64 for modern computers + sse4.2
Stockfish Windows x64 for modern computers

Json Spirit not thread safe on Ubuntu

A few days ago I noticed that my program crashed quickly and often after I added multithreading to it. GDB showed the segfault in json_spirit::read(), so I wondered whether this was a thread safety issue. I could not find any information about whether the json spirit package for Ubuntu is thread safe or not. A simple test reproduces the behaviour:

#include <json_spirit.h>
#include <thread>
#include <vector>

// Each thread parses an empty JSON object in a tight loop.
void test()
{
	json_spirit::mValue v;
	for (int i = 0; i < 1000; ++i)
		json_spirit::read("{}", v);
}

int main()
{
	std::vector<std::thread> threads;
	for (int i = 0; i < 8; ++i)
		threads.emplace_back(test);
	for (auto& th : threads)
		th.join();
}

Compiled with:

clang++ test.cxx -pthread -ljson_spirit -std=c++11 -O0 -g3

The test program crashes immediately, at least on my system (Ubuntu 14.04, libjson-spirit-dev 4.05-1.1). The backtrace looks like this:

#0  0x00007ffff0000960 in ?? ()
#1  0x000000000042e0de in __gnu_cxx::__normal_iterator<char const*, std::string> json_spirit::read_range_or_throw<__gnu_cxx::__normal_iterator<char const*, std::string>, json_spirit::Value_impl<json_spirit::Config_map<std::string> > >(__gnu_cxx::__normal_iterator<char const*, std::string>, __gnu_cxx::__normal_iterator<char const*, std::string>, json_spirit::Value_impl<json_spirit::Config_map<std::string> >&) ()
#2  0x000000000042e1bc in bool json_spirit::read_range<__gnu_cxx::__normal_iterator<char const*, std::string>, json_spirit::Value_impl<json_spirit::Config_map<std::string> > >(__gnu_cxx::__normal_iterator<char const*, std::string>&, __gnu_cxx::__normal_iterator<char const*, std::string>, json_spirit::Value_impl<json_spirit::Config_map<std::string> >&) ()
#3  0x00000000004080fd in json_spirit::read(std::string const&, json_spirit::Value_impl<json_spirit::Config_map<std::string> >&) ()
#4  0x00000000004028fb in test () at test.cxx:10
#5  0x0000000000404f3f in std::_Bind_simple<void (*())()>::_M_invoke<>(std::_Index_tuple<>) (this=0x6f3060) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1731
#6  0x0000000000404f15 in std::_Bind_simple<void (*())()>::operator()() (this=0x6f3060) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/functional:1720
#7  0x0000000000404eec in std::thread::_Impl<std::_Bind_simple<void (*())()> >::_M_run() (this=0x6f3048) at /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/thread:115
#8  0x00007ffff7b87bf0 in ?? () from /usr/lib/x86_64-linux-gnu/
#9  0x00007ffff73a4182 in start_thread (arg=0x7ffff6fd5700) at pthread_create.c:312
#10 0x00007ffff70d130d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Adding a mutexed, thread-safe wrapper function that calls json_spirit::read() fixed the problem:

#include <json_spirit.h>
#include <thread>
#include <vector>
#include <mutex>

// Serialize all calls to json_spirit::read() with a single mutex.
bool js_read(const std::string& js, json_spirit::mValue& v)
{
	static std::mutex mtx;
	std::lock_guard<std::mutex> lock(mtx);
	return json_spirit::read(js, v);
}

void test()
{
	json_spirit::mValue v;
	for (int i = 0; i < 1000; ++i)
		js_read("{}", v);
}

int main()
{
	std::vector<std::thread> threads;
	for (int i = 0; i < 8; ++i)
		threads.emplace_back(test);
	for (auto& th : threads)
		th.join();
}

Note that the initialization of the static local mutex is only thread-safe since C++11. Alternatively, you can use a global mutex.