tag:blogger.com,1999:blog-34514135874133759152024-03-13T18:48:27.672+00:00adrianwalker.orgProgramming & ThatUnknownnoreply@blogger.comBlogger85125tag:blogger.com,1999:blog-3451413587413375915.post-62510838873966561692023-11-05T20:12:00.012+00:002023-11-07T20:32:00.462+00:00PrivateGPT Installation Notes<p>
These notes work as of 7th November 2023, using Xubuntu 22.04 - your mileage may vary.
</p>
<h3>PrivateGPT</h3>
<p>
PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an Internet connection. 100% private, no data leaves your execution environment at any point.
</p>
<h3>Repo</h3>
<a href="https://github.com/imartinez/privateGPT">https://github.com/imartinez/privateGPT</a>
<h3>Docs</h3>
<a href="https://docs.privategpt.dev">https://docs.privategpt.dev</a>
<h3>Install</h3>
<a href="https://docs.privategpt.dev/#section/Installation-and-Settings">https://docs.privategpt.dev/#section/Installation-and-Settings</a>
<h4>Install git</h4>
<pre class="brush:bash">sudo apt install git</pre>
<h4>Install python</h4>
<pre class="brush:bash">sudo apt install python3</pre>
<h4>Install pip</h4>
<pre class="brush:bash">sudo apt install python3-pip</pre>
<h4>Install pyenv</h4>
<pre class="brush:bash">
cd ~
curl https://pyenv.run | bash
</pre>
<p>
Add the commands to ~/.bashrc by running the following in your terminal:
</p>
<pre class="brush:bash">
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
</pre>
<p>
If you have ~/.profile, ~/.bash_profile or ~/.bash_login, add the commands there as well. If you have none of these, add them to ~/.profile:
</p>
<pre class="brush:bash">
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.profile
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.profile
echo 'eval "$(pyenv init -)"' >> ~/.profile
</pre>
<p>Restart your shell for the changes to take effect.</p>
<h4>Install Python 3.11</h4>
<pre class="brush:bash">
pyenv install 3.11
pyenv local 3.11
</pre>
<p>If you see these errors and warnings, install the required dependencies:</p>
<pre class="brush:bash">
ModuleNotFoundError: No module named '_bz2'
WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adrian/.pyenv/versions/3.11.6/lib/python3.11/curses/__init__.py", line 13, in <module>
from _curses import *
ModuleNotFoundError: No module named '_curses'
WARNING: The Python curses extension was not compiled. Missing the ncurses lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adrian/.pyenv/versions/3.11.6/lib/python3.11/ctypes/__init__.py", line 8, in <module>
from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
WARNING: The Python ctypes extension was not compiled. Missing the libffi lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'readline'
WARNING: The Python readline extension was not compiled. Missing the GNU readline lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adrian/.pyenv/versions/3.11.6/lib/python3.11/ssl.py", line 100, in <module>
import _ssl # if we can't import it, let the error propagate
^^^^^^^^^^^
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
ModuleNotFoundError: No module named '_sqlite3'
WARNING: The Python sqlite3 extension was not compiled. Missing the SQLite3 lib?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adrian/.pyenv/versions/3.11.6/lib/python3.11/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '_tkinter'
WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adrian/.pyenv/versions/3.11.6/lib/python3.11/lzma.py", line 27, in <module>
from _lzma import *
ModuleNotFoundError: No module named '_lzma'
WARNING: The Python lzma extension was not compiled. Missing the lzma lib?
</pre>
<p>Install dependencies:</p>
<pre class="brush:bash">
sudo apt update
sudo apt install libbz2-dev
sudo apt install libncurses-dev
sudo apt install libffi-dev
sudo apt install libreadline-dev
sudo apt install libssl-dev
sudo apt install libsqlite3-dev
sudo apt install tk-dev
sudo apt install liblzma-dev
</pre>
<p>Try installing Python 3.11 again:</p>
<pre class="brush:bash">
pyenv install 3.11
pyenv local 3.11
</pre>
<h4>Install pipx</h4>
<pre class="brush:bash">
python3 -m pip install --user pipx
python3 -m pipx ensurepath
</pre>
<p>Restart your shell for the changes to take effect.</p>
<h4>Install poetry</h4>
<pre class="brush:bash">pipx install poetry</pre>
<h4>Clone the privateGPT repo</h4>
<pre class="brush:bash">
cd ~
git clone https://github.com/imartinez/privateGPT
cd privateGPT
</pre>
<h4>Install dependencies</h4>
<pre class="brush:bash">poetry install --with ui,local</pre>
<h4>Download Embedding and LLM models</h4>
<pre class="brush:bash">poetry run python scripts/setup</pre>
<h4>Run the local server</h4>
<pre class="brush:bash">PGPT_PROFILES=local make run</pre>
<h4>Navigate to the UI</h4>
<a href="http://localhost:8001/">http://localhost:8001/</a>
<h4>Shutdown</h4>
<pre class="brush:bash">ctrl-c</pre>
<h3>GPU Acceleration</h3>
<h4>Verify the machine has a CUDA-Capable GPU</h4>
<pre class="brush:bash">lspci | grep -i nvidia</pre>
<h4>Install the NVIDIA CUDA Toolkit</h4>
<pre class="brush:bash">
sudo apt update
sudo apt upgrade
sudo apt install nvidia-cuda-toolkit
</pre>
<h4>Verify installation</h4>
<pre class="brush:bash">
nvcc --version
nvidia-smi
</pre>
<h4>Install llama.cpp with GPU support</h4>
<p>Find your version of llama_cpp_python:</p>
<pre class="brush:bash">
poetry run pip list | grep llama_cpp_python
</pre>
<p>
Substitute the version in the next command:
</p>
<pre class="brush:bash">
cd ~/privateGPT
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.13
</pre>
<p>
If you see an error like this, try specifying the location of <i>nvcc</i>:
</p>
<pre class="brush:bash">
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [35 lines of output]
*** scikit-build-core 0.6.0 using CMake 3.27.7 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmp591ifmq4/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda/include (found version "12.3.52")
-- cuBLAS found
-- The CUDA compiler identification is unknown
CMake Error at /tmp/pip-build-env-h3vy91ne/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.27/Modules/CMakeDetermineCUDACompiler.cmake:603 (message):
Failed to detect a default CUDA architecture.
Compiler output:
Call Stack (most recent call first):
vendor/llama.cpp/CMakeLists.txt:258 (enable_language)
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
</pre>
<p>
Build with the location of <i>nvcc</i>:
</p>
<pre class="brush:bash">
CUDACXX=/usr/local/cuda-12/bin/nvcc CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.13
</pre>
<h4>Start the server</h4>
<pre class="brush:bash">
cd ~/privateGPT
pyenv local 3.11
PGPT_PROFILES=local make run
</pre>
<p>If you see this error, configure the number of layers offloaded to VRAM:</p>
<pre class="brush:bash">
CUDA error 2 at /tmp/pip-install-pqg0kmzj/llama-cpp-python_a94e4e69cdce4224adec44b01749f74a/vendor/llama.cpp/ggml-cuda.cu:7636: out of memory
current device: 0
make: *** [Makefile:36: run] Error 1
</pre>
<p>Configure the number of layers offloaded to VRAM:</p>
<pre class="brush:bash">
cp ~/privateGPT/private_gpt/components/llm/llm_component.py ~/privateGPT/private_gpt/components/llm/llm_component.py.backup
vim ~/privateGPT/private_gpt/components/llm/llm_component.py
</pre>
<p>change:</p>
<pre class="brush:python">
model_kwargs={"n_gpu_layers": -1},
</pre>
<p>to:</p>
<pre class="brush:python">
model_kwargs={"n_gpu_layers": 10},
</pre>
<p>Try to start the server again:</p>
<pre class="brush:bash">
cd ~/privateGPT
pyenv local 3.11
PGPT_PROFILES=local make run
</pre>
<p>
If the server is using the GPU you will see something like this in the output:
</p>
<pre class="brush:bash">
...
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA RTX A1000 Laptop GPU, compute capability 8.6
...
llm_load_tensors: ggml ctx size = 0.11 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 2902.35 MB
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/35 layers to GPU
llm_load_tensors: VRAM used: 1263.12 MB
...............................................................................................
llama_new_context_with_model: n_ctx = 3900
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 487.50 MB
llama_build_graph: non-view tensors processed: 740/740
llama_new_context_with_model: compute buffer total size = 282.00 MB
llama_new_context_with_model: VRAM scratch buffer: 275.37 MB
llama_new_context_with_model: total VRAM used: 1538.50 MB (model: 1263.12 MB, context: 275.37 MB)
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
...
</pre>
<h3>Ingest</h3>
<p>For example, to download and ingest an HTML copy of <a href="https://github.com/basho-labs/little_riak_book">A Little Riak Book</a>:</p>
<pre class="brush:bash">
cd ~/privateGPT
mkdir ${PWD}/ingest
wget -P ${PWD}/ingest https://raw.githubusercontent.com/basho-labs/little_riak_book/master/rendered/riaklil-en.html
poetry run python scripts/ingest_folder.py ${PWD}/ingest
</pre>
<h3>Configure Temperature</h3>
<pre class="brush:bash">
cp ~/privateGPT/private_gpt/components/llm/llm_component.py ~/privateGPT/private_gpt/components/llm/llm_component.py.backup
vim ~/privateGPT/private_gpt/components/llm/llm_component.py
</pre>
<p>change:</p>
<pre class="brush:python">
temperature=0.1
</pre>
<p>to:</p>
<pre class="brush:python">
temperature=0.2
</pre>
<p>Restart the server</p>
<pre class="brush:bash">
ctrl-c
cd ~/privateGPT
pyenv local 3.11
PGPT_PROFILES=local make run
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-36118896569365259042023-09-10T21:35:00.006+01:002023-09-11T10:58:47.121+01:00Riak-like secondary index queries for S3<p>
This is an idea for how to provide <a href="https://en.wikipedia.org/wiki/Database_index#Secondary_index">secondary index queries</a>, similar to <a href="https://docs.riak.com/riak/kv/latest/developing/usage/secondary-indexes/index.html">Riak 2i</a>, on top of <a href="https://aws.amazon.com/s3/">Amazon S3</a>, using nothing but S3, <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html">boto3</a> and some Python.
</p>
<p>
This code hasn't been anywhere near a production environment, never benchmarked, only processed trivial amounts of data and tested only against <a href="https://localstack.cloud/">localstack</a>. It's not even commented. As such, it should not be used by anybody for any reason - ever.
</p>
<p>
If you do give it a try, let me know how it went.
</p>
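<p>
The trick that makes this work: each secondary index entry is written as an empty S3 object under a key of the form <code>indexes/{index}/{value}/{key}</code>, and S3 lists keys in lexicographic order, so a range query becomes a bounded key scan. Here's a toy sketch of that idea, with a plain sorted list standing in for a bucket listing - no S3 or boto3 involved, and the keys are made up for illustration:
</p>

```python
# A sorted list stands in for an S3 bucket listing. Each "object" key
# encodes index name, indexed value and record key; because S3 returns
# keys in lexicographic order, a range query is just a bounded scan.
index_keys = sorted([
    'indexes/idx-gender-dob/1|19800101/KEY0002',
    'indexes/idx-gender-dob/1|20000101/KEY0004',
    'indexes/idx-gender-dob/2|19700101/KEY0001',
    'indexes/idx-gender-dob/2|19900101/KEY0003',
])

def range_query(index, start, end=None):
    if end is None:
        end = start
    start_key = f'indexes/{index}/{start}'
    end_key = f'indexes/{index}/{end}'
    for key in index_keys:
        if key < start_key:
            continue  # before the range; real code skips these with StartAfter
        if key[:len(end_key)] > end_key:
            break  # past the end of the range, stop scanning
        yield key.split('/')[-1]  # the record key is the last path segment

print(list(range_query('idx-gender-dob', '2|')))  # → ['KEY0001', 'KEY0003']
```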
<h4>s32i.py</h4>
<pre class="brush:python">
import os
import re
from concurrent.futures.thread import ThreadPoolExecutor

from botocore.exceptions import ClientError


class S32iDatastore():

    __EXECUTOR = ThreadPoolExecutor(max_workers=os.cpu_count() - 1)

    INDEXES_FOLDER = 'indexes'
    LIST_OBJECTS = 'list_objects_v2'

    def __init__(self, s3_resource, bucket_name):
        self.s3_resource = s3_resource
        self.bucket_name = bucket_name

    def __run_in_thread(self, fn, *args):
        return self.__EXECUTOR.submit(fn, *args)

    def get(self, key):
        record = self.s3_resource.Object(self.bucket_name, key).get()
        indexes = record['Metadata']
        data = record['Body'].read()
        return data, indexes

    def head(self, key):
        record = self.s3_resource.meta.client.head_object(Bucket=self.bucket_name, Key=key)
        return record['Metadata']

    def exists(self, key):
        try:
            self.head(key)
            return True
        except ClientError:
            return False

    def put(self, key, data='', indexes={}):
        self.__run_in_thread(self.create_secondary_indexes, key, indexes)
        return self.s3_resource.Object(self.bucket_name, key).put(
            Body=data,
            Metadata=indexes)

    def delete(self, key):
        self.__run_in_thread(self.remove_secondary_indexes, key, self.head(key))
        return self.s3_resource.Object(self.bucket_name, key).delete()

    def create_secondary_indexes(self, key, indexes):
        for index, values in indexes.items():
            for value in values.split(','):
                self.put(f'{self.INDEXES_FOLDER}/{index}/{value}/{key}')

    def remove_secondary_indexes(self, key, indexes):
        for index, values in indexes.items():
            for value in values.split(','):
                self.s3_resource.Object(self.bucket_name, f'{self.INDEXES_FOLDER}/{index}/{value}/{key}').delete()

    def secondary_index_range_query(self,
                                    index,
                                    start, end=None,
                                    page_size=1000, max_results=10000,
                                    term_regex=None, return_terms=False):

        if end is None:
            end = start

        if term_regex:
            pattern = re.compile(f'^{self.INDEXES_FOLDER}/{index}/{term_regex}$')

        start_key = f'{self.INDEXES_FOLDER}/{index}/{start}'
        end_key = f'{self.INDEXES_FOLDER}/{index}/{end}'

        paginator = self.s3_resource.meta.client.get_paginator(self.LIST_OBJECTS)
        pages = paginator.paginate(
            Bucket=self.bucket_name,
            StartAfter=start_key,
            PaginationConfig={
                'MaxItems': max_results,
                'PageSize': page_size})

        for page in pages:
            # pages with no matching keys have no 'Contents' entry
            for result in page.get('Contents', []):
                result_key = result['Key']
                if result_key[0:len(end_key)] > end_key:
                    return
                if term_regex and not pattern.match(result_key):
                    continue
                parts = result_key.split('/')
                if return_terms:
                    yield (parts[-1], parts[-2])
                else:
                    yield parts[-1]
</pre>
<h4>s32i_test.py</h4>
<pre class="brush:python">
import json
import unittest

import boto3

from s32i import S32iDatastore


class S32iDatastoreTest(unittest.TestCase):

    LOCALSTACK_ENDPOINT_URL = "http://localhost.localstack.cloud:4566"
    TEST_BUCKET = 's32idatastore-test-bucket'

    @classmethod
    def setUpClass(cls):
        cls.s3_resource = cls.create_s3_resource()
        cls.bucket = cls.create_bucket(cls.TEST_BUCKET)
        cls.datastore = S32iDatastore(cls.s3_resource, cls.TEST_BUCKET)
        cls.create_test_data()

    @classmethod
    def tearDownClass(cls):
        cls.delete_bucket()

    @classmethod
    def create_s3_resource(cls, endpoint_url=LOCALSTACK_ENDPOINT_URL):
        return boto3.resource(
            's3',
            endpoint_url=endpoint_url)

    @classmethod
    def create_bucket(cls, bucket_name):
        return cls.s3_resource.create_bucket(Bucket=bucket_name)

    @classmethod
    def delete_bucket(cls):
        cls.bucket.objects.all().delete()

    @classmethod
    def create_test_data(cls):
        cls.datastore.put(
            'KEY0001',
            json.dumps({'name': 'Alice', 'dob': '19700101', 'gender': '2'}),
            {'idx-gender-dob': '2|19700101'})
        cls.datastore.put(
            'KEY0002',
            json.dumps({'name': 'Bob', 'dob': '19800101', 'gender': '1'}),
            {'idx-gender-dob': '1|19800101'})
        cls.datastore.put(
            'KEY0003',
            json.dumps({'name': 'Carol', 'dob': '19900101', 'gender': '2'}),
            {'idx-gender-dob': '2|19900101'})
        cls.datastore.put(
            'KEY0004',
            json.dumps({'name': 'Dan', 'dob': '20000101', 'gender': '1'}),
            {'idx-gender-dob': '1|20000101'})
        cls.datastore.put(
            'KEY0005',
            json.dumps({'name': 'Eve', 'dob': '20100101', 'gender': '2'}),
            {'idx-gender-dob': '2|20100101'})
        cls.datastore.put(
            'KEY0006',
            json.dumps({'name': ['Faythe', 'Grace'], 'dob': '20200101', 'gender': '2'}),
            {'idx-gender-dob': '2|20200101', 'idx-name': 'Faythe,Grace'})
        cls.datastore.put('KEY0007', indexes={'idx-same': 'same'})
        cls.datastore.put('KEY0008', indexes={'idx-same': 'same'})
        cls.datastore.put('KEY0009', indexes={'idx-same': 'same'})
        cls.datastore.put(
            'KEY9999',
            json.dumps({'name': 'DELETE ME', 'dob': '99999999', 'gender': '9'}),
            {'idx-gender-dob': '9|99999999'})

    def test_get_record(self):
        data, indexes = self.datastore.get('KEY0001')
        self.assertDictEqual({'name': 'Alice', 'dob': '19700101', 'gender': '2'}, json.loads(data))
        self.assertDictEqual({'idx-gender-dob': '2|19700101'}, indexes)

    def test_head_record(self):
        indexes = self.datastore.head('KEY0002')
        self.assertDictEqual({'idx-gender-dob': '1|19800101'}, indexes)

    def test_2i_no_results(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '3|30100101')
        self.assertListEqual([], list(keys))

    def test_2i_index_does_not_exist(self):
        keys = self.datastore.secondary_index_range_query('idx-does-not-exist', '3|30100101')
        self.assertListEqual([], list(keys))

    def test_2i_exact_value(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '2|20100101')
        self.assertListEqual(['KEY0005'], list(keys))

    def test_2i_gender_2(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '2|')
        self.assertListEqual(['KEY0001', 'KEY0003', 'KEY0005', 'KEY0006'], sorted(list(keys)))

    def test_2i_gender_2_max_results_2(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '2|', max_results=2)
        self.assertListEqual(['KEY0001', 'KEY0003'], sorted(list(keys)))

    def test_2i_gender_1_dob_19(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '1|19')
        self.assertListEqual(['KEY0002'], list(keys))

    def test_2i_gender_2_dob_19(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '2|19')
        self.assertListEqual(['KEY0001', 'KEY0003'], sorted(list(keys)))

    def test_2i_gender_2_dob_1990_2000(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '2|1990', '2|2000')
        self.assertListEqual(['KEY0003'], list(keys))

    def test_2i_term_regex(self):
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '1|', '2|', term_regex=r'[1|2]\|20[1|2]0.*')
        self.assertListEqual(['KEY0005', 'KEY0006'], list(keys))

    def test_2i_return_terms(self):
        key_terms = self.datastore.secondary_index_range_query(
            'idx-gender-dob', '1|', '2|',
            return_terms=True)
        self.assertListEqual([
            ('KEY0001', '2|19700101'),
            ('KEY0002', '1|19800101'),
            ('KEY0003', '2|19900101'),
            ('KEY0004', '1|20000101'),
            ('KEY0005', '2|20100101'),
            ('KEY0006', '2|20200101')],
            sorted(list(key_terms)))

    def test_2i_term_regex_return_terms(self):
        key_terms = self.datastore.secondary_index_range_query(
            'idx-gender-dob', '1|', '2|',
            term_regex=r'[1|2]\|20[1|2]0.*',
            return_terms=True)
        self.assertListEqual([('KEY0005', '2|20100101'), ('KEY0006', '2|20200101')], list(key_terms))

    def test_exists(self):
        self.assertTrue(self.datastore.exists('KEY0001'))
        self.assertFalse(self.datastore.exists('1000YEK'))

    def test_multiple_index_values(self):
        indexes = self.datastore.head('KEY0006')
        self.assertDictEqual({'idx-gender-dob': '2|20200101', 'idx-name': 'Faythe,Grace'}, indexes)
        keys = self.datastore.secondary_index_range_query('idx-name', 'Faythe')
        self.assertListEqual(['KEY0006'], list(keys))
        keys = self.datastore.secondary_index_range_query('idx-name', 'Grace')
        self.assertListEqual(['KEY0006'], list(keys))

    def test_multiple_keys_same_index(self):
        keys = self.datastore.secondary_index_range_query('idx-same', 'same')
        self.assertListEqual(['KEY0007', 'KEY0008', 'KEY0009'], sorted(list(keys)))

    def test_delete(self):
        self.assertTrue(self.datastore.exists('KEY9999'))
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '9|99999999')
        self.assertListEqual(['KEY9999'], list(keys))
        self.datastore.delete('KEY9999')
        self.assertFalse(self.datastore.exists('KEY9999'))
        keys = self.datastore.secondary_index_range_query('idx-gender-dob', '9|99999999')
        self.assertListEqual([], list(keys))
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-87594768247160301202021-03-27T22:44:00.016+00:002021-04-20T13:01:26.230+01:00National Statistics Postcode Lookup Radius Search With Redis<p>
Of all the questions posed by Plato, the profundity of one stands head and shoulders above the rest:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVhJzj_S9IiRF0Z0bw5i0OtWonjje1ADBMV6WVJHfH6KMtjdeSXmkXyDmzyQqBmGouHjOLll96pIiccIs5tOgMusNelbqOrfvLGYT8iv4C0XSlLYObwtR45XYA1jeeR-AJaDb_9j4KQ6I/s620/plato.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="620" data-original-width="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVhJzj_S9IiRF0Z0bw5i0OtWonjje1ADBMV6WVJHfH6KMtjdeSXmkXyDmzyQqBmGouHjOLll96pIiccIs5tOgMusNelbqOrfvLGYT8iv4C0XSlLYObwtR45XYA1jeeR-AJaDb_9j4KQ6I/s320/plato.jpg"/></a></div>
<p>
To answer Plato's question we're going to need some geographic information about UK postcodes:
</p>
<h4>National Statistics Postcode Lookup</h4>
<p>
This data set is probably the right one for the job. It's from a reliable source, it contains longitude and latitude for 2.6 million postcodes and, best of all, it's free.
</p>
<p>
The data is downloadable from <a href="https://geoportal.statistics.gov.uk">geoportal.statistics.gov.uk</a>, first item under the 'Postcodes' menu. The dataset appears to be released quarterly every February, May, August and November.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizxNKHgt36Zqh0LTJXglgIO29-0oPfx1qs-fh_IdHjhB2pOMPgYmuDj4CeRdabPQzUFM6KVJ7hnvjI0ZCPLC_N-oiR6Oh6i55ZQEB1R3WZmeVLg_73Kff3BEFg1srcEvXMW51BEZJDtZc/s510/ons.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="221" data-original-width="510" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizxNKHgt36Zqh0LTJXglgIO29-0oPfx1qs-fh_IdHjhB2pOMPgYmuDj4CeRdabPQzUFM6KVJ7hnvjI0ZCPLC_N-oiR6Oh6i55ZQEB1R3WZmeVLg_73Kff3BEFg1srcEvXMW51BEZJDtZc/s320/ons.png"/></a></div>
<p>
At the time of writing, the latest download link points to:
</p>
<p>
<a href="https://www.arcgis.com/sharing/rest/content/items/7606baba633d4bbca3f2510ab78acf61/data">www.arcgis.com/sharing/rest/content/items/7606baba633d4bbca3f2510ab78acf61/data</a>
</p>
<p>
Interestingly, the domain is <a href="https://www.arcgis.com">www.arcgis.com</a>, the website for a well-known commercial Geographic Information System - ArcGIS, from <a href="https://www.esriuk.com/en-gb/home">Esri</a>.
</p>
<h4>Other data sets are available</h4>
<p>
<i>Code-Point Open</i>
</p>
<p>
<a href="https://www.ordnancesurvey.co.uk/business-government/products/code-point-open">Code-Point Open</a> from <a href="https://www.ordnancesurvey.co.uk">Ordnance Survey</a>, free but location information is coded as <a href="https://en.wikipedia.org/wiki/Grid_reference_system">Eastings and Northings</a>, not ideal for this project.
</p>
<p>
<i>PostZon</i>
</p>
<p>
Part of the <a href="https://www.poweredbypaf.com/">PAF</a> datasets from Royal Mail, mentioned in the <a href="https://www.poweredbypaf.com/wp-content/uploads/2017/07/Latest-Programmers_guide_Edition-7-Version-6.pdf">PAF Programmers Guide</a>; includes longitude and latitude, but not much information beyond that. Non-free, and apparently leaked by Wikileaks in 2009:
</p>
<p>
<a href="https://www.theguardian.com/technology/2009/sep/23/post-office-database-copyright-leak">Was the leak of Royal Mail's PostZon database a good or bad thing?</a>
</p>
<p>
<i>UK Postcodes to Longitudes Latitudes Table</i>
</p>
<p>
Provided by <a href="https://www.postcodeaddressfile.co.uk/order/postcodes_longitude_latitude_table/postcodes_longitude_latitude_single_user_licence.htm">postcodeaddressfile.co.uk</a> - a Royal Mail reseller. Appears to be a combination of PAF and OS data; has longitude and latitude data, but costs £199 for an <a href="https://www.postcodeaddressfile.co.uk/licences/grid_references/postcode_longitude_latitude_licence_options.htm">Organisation Licence</a>.
</p>
<h4>Geospatial Index</h4>
<p>
<a href="https://redislabs.com/redis-best-practices/indexing-patterns/geospatial/">Redis provides geospatial indexing</a> and a <a href="https://redis.io/commands#geo">bunch of related commands</a>, awesome - as long as you can provide it with longitude and lattitude data:
</p>
<p>
Ideal for answering the question "How many postcodes are within a given radius of a given postcode?" is the <a href="https://redis.io/commands/georadiusbymember">GEORADIUSBYMEMBER</a> command.
</p>
<h4>Data Load</h4>
<p>
This bash script downloads the February 2021 release of the National Statistics Postcode Lookup ZIP file, unzips the file we need, parses the data and formats it into Redis commands, which are piped to Redis.
</p>
<p>
The script uses the <a href="http://manpages.ubuntu.com/manpages/groovy/man1/csvtool.1.html">csvtool</a> command-line utility, which will need to be installed if you don't already have it.
</p>
<h4>load-nspl.sh</h4>
<pre class="brush:bash">
#!/bin/bash
# Data URL from: https://geoportal.statistics.gov.uk/datasets/national-statistics-postcode-lookup-february-2021
DATA_URL='https://www.arcgis.com/sharing/rest/content/items/7606baba633d4bbca3f2510ab78acf61/data'
ZIP_FILE='/tmp/nspl.zip'
CSV_FILE='/tmp/nspl.csv'
CSV_REGEX='NSPL.*UK\.csv'
REDIS_KEY='nspl' # NSPL - National Statistics Postcode Lookup
POSTCODE_FIELD=3 # PCDS - Unit postcode variable length version
LAT_FIELD=34 # LAT - Decimal degrees latitude
LONG_FIELD=35 # LONG - Decimal degrees longitude
START_TIME="$(date -u +%s)"
# Download data file if it doesn't exist
if [ -f "$ZIP_FILE" ]
then
echo "'$ZIP_FILE' exists, skipping download"
else
echo "Downloading '$ZIP_FILE'"
wget $DATA_URL -O $ZIP_FILE
fi
# Unzip data if it doesn't exist
if [ -f "$CSV_FILE" ]
then
echo "'$CSV_FILE' exists, skipping unzipping"
else
echo "Unzipping data to '$CSV_FILE'"
unzip -p $ZIP_FILE $(unzip -Z1 $ZIP_FILE | grep -E $CSV_REGEX) > $CSV_FILE
fi
# Process data file, create Redis commands, pipe to redis-cli
echo "Processing data file '$CSV_FILE'"
csvtool format "GEOADD $REDIS_KEY %($LONG_FIELD) %($LAT_FIELD) \"%($POSTCODE_FIELD)\"\n" $CSV_FILE \
| redis-cli --pipe
# Done
END_TIME="$(date -u +%s)"
ELAPSED_TIME="$(($END_TIME-$START_TIME))"
MEMBERS=$(echo "zcard nspl" | redis-cli | cut -f 1)
echo "$MEMBERS postcodes loaded"
echo "Elapsed: $ELAPSED_TIME seconds"
</pre>
<p>
Expect output from the script similar to this:
</p>
<pre>
Downloading '/tmp/nspl.zip'
...
196050K ...... 100% 47.2M=54s
...
Unzipping data to '/tmp/nspl.csv'
Processing data file '/tmp/nspl.csv'
...
ERR invalid longitude,latitude pair 0.000000,99.999999
...
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 23258, replies: 2656252
2632994 postcodes loaded
Elapsed: 18 seconds
</pre>
<p>
Don't worry about the errors:
</p>
<pre>
ERR invalid longitude,latitude pair 0.000000,99.999999
</pre>
<p>
There are about 23,000 entries in the data file with invalid longitude and latitude values, which Redis will reject. The NSPL User Guide (available in the downloaded ZIP file - NSPL User Guide Feb 2021.pdf) has this to say about them:
</p>
<p>
<i>
"Decimal degrees latitude - The postcode coordinates in degrees latitude to six decimal places; 99.999999 for postcodes in the Channel Islands and the Isle of Man, and for postcodes with no grid reference."
</i>
</p>
<p>and</p>
<p>
<i>
"Decimal degrees longitude - The postcode coordinates in degrees longitude to six decimal places; 0.000000 for postcodes in the Channel Islands and the Isle of Man, and for postcodes with no grid reference."
</i>
</p>
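<p>
If you'd rather those sentinel rows never reached Redis at all, they can be filtered out before the GEOADD step. A rough, untested sketch - field positions as in the load script (postcode in column 3, latitude in column 34, longitude in column 35, 1-indexed), and the sample rows below are made up for illustration:
</p>

```python
# Illustrative pre-filter for the NSPL sentinel coordinates, so invalid
# rows are dropped rather than rejected by Redis.
import csv

NO_GRID_REF = ('99.999999', '0.000000')  # (lat, long) 'no grid reference' sentinels

def valid_rows(lines):
    for row in csv.reader(lines):
        lat, lng = row[33], row[34]  # 1-indexed fields 34 (LAT) and 35 (LONG)
        if (lat, lng) == NO_GRID_REF:
            continue  # Channel Islands, Isle of Man, or no grid reference
        yield row[2], lat, lng  # 1-indexed field 3 (PCDS)

good = [''] * 35
good[2], good[33], good[34] = 'YO24 1AB', '53.958314', '-1.093030'
bad = [''] * 35
bad[2], bad[33], bad[34] = 'GY1 1AA', '99.999999', '0.000000'

print(list(valid_rows([','.join(good), ','.join(bad)])))
```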
<h4>Queries</h4>
<p>
Once we've got a full dataset loaded we can run some queries with <a href="https://redis.io/topics/rediscli">redis-cli</a>:
</p>
<pre class="brush:bash">
127.0.0.1:6379> geopos nspl "YO24 1AB"
1) 1) "-1.0930296778678894"
2) "53.95831391882791195"
127.0.0.1:6379> geopos nspl "YO1 7HH"
1) 1) "-1.0816839337348938"
2) "53.96135558421912037"
127.0.0.1:6379> geodist nspl "YO24 1AB" "YO1 7HH" km
"0.8159"
127.0.0.1:6379> georadiusbymember nspl "YO24 1AB" 100 m WITHDIST
1) 1) "YO24 1AY"
2) "29.0576"
2) 1) "YO1 6HT"
2) "2.0045"
3) 1) "YO2 2AY"
2) "2.0045"
4) 1) "YO24 1AB"
2) "0.0000"
5) 1) "YO24 1AA"
2) "69.7119"
127.0.0.1:6379> georadiusbymember nspl "YO1 7HH" 50 m WITHDIST
1) 1) "YO1 2HT"
2) "32.6545"
2) 1) "YO1 7HT"
2) "32.6545"
3) 1) "YO1 7HH"
2) "0.0000"
4) 1) "YO1 2HZ"
2) "40.3405"
5) 1) "YO1 2HL"
2) "37.6516"
6) 1) "YO1 7HL"
2) "38.9421"
</pre>
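<p>
As a sanity check, GEODIST's answer can be reproduced with a haversine great-circle calculation using the coordinates GEOPOS returned above. The earth radius constant here is the value Redis is believed to use internally; treat it as an assumption:
</p>

```python
# Haversine great-circle distance between the GEOPOS coordinates for
# YO24 1AB and YO1 7HH, to cross-check the GEODIST result of 0.8159 km.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6372797.560856  # assumed to match Redis's internal constant

def haversine(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

d = haversine(-1.0930296778678894, 53.95831391882791195,
              -1.0816839337348938, 53.96135558421912037)
print(round(d / 1000, 4))  # kilometres; GEODIST reported 0.8159
```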
<h4>REST API</h4>
<p>
Here's a super basic Flask-based REST service to query the geographic index. Postcode, distance and units can be provided as search parameters in the request URL. Postcodes within the requested radius are returned as JSON, along with their distance from the provided postcode.
</p>
<h4>nspl-rest.py</h4>
<pre class="brush:python">
from flask import Flask, jsonify
from redis import Redis

REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 0
REDIS_KEY = 'nspl'

app = Flask(__name__)
r = Redis(host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB,
          decode_responses=True)  # return str rather than bytes, so jsonify can serialise results


@app.route('/radius/&lt;postcode&gt;/&lt;distance&gt;/&lt;unit&gt;', methods=['GET'])
def radius(postcode, distance, unit):

    try:
        results = r.georadiusbymember(REDIS_KEY,
                                      postcode, distance, unit,
                                      withdist=True)
    except Exception:
        results = []

    return jsonify([{
        'postcode': result[0],
        'distance': result[1]
    } for result in results])


app.run()
</pre>
<h4>API Example Usage</h4>
<pre class="brush:javascript">
$ curl localhost:5000/radius/YO24%201AB/100/m | json_pp
[
{
"distance" : 29.0576,
"postcode" : "YO24 1AY"
},
{
"distance" : 2.0045,
"postcode" : "YO1 6HT"
},
{
"distance" : 2.0045,
"postcode" : "YO2 2AY"
},
{
"distance" : 0,
"postcode" : "YO24 1AB"
},
{
"distance" : 69.7119,
"postcode" : "YO24 1AA"
}
]
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/nspl-radis-search">nspl-radis-search</a></li>
</ul>
</p>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-82218581585343825582020-10-03T18:45:00.006+01:002020-10-03T18:54:38.221+01:00Code-Point Open Postcode Distance AWS Lambda<p>
Redis supports calculating distances using longitude and latitude with <a href="https://redis.io/commands/geodist">GEODIST</a>, but I wanted to use <a href="https://en.wikipedia.org/wiki/Easting_and_northing">eastings and northings</a> to calculate distance between postcodes.
</p>
<p>
This project uses the <a href="https://www.ordnancesurvey.co.uk/business-government/products/code-point-open">Code-Point Open</a> dataset, loaded into AWS ElastiCache (Redis) from an AWS S3 bucket, and provides an AWS Lambda REST API to query the distance between two given postcodes.
</p>
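<p>
With planar eastings and northings the distance calculation is plain Pythagoras - no spherical geometry required. A quick sketch using the YO24 1AB and YO1 7HH coordinates from the example usage below:
</p>

```python
# Straight-line distance between two points on the OS National Grid,
# using the Code-Point Open eastings/northings for YO24 1AB and YO1 7HH.
from math import hypot

def distance(from_eastings, from_northings, to_eastings, to_northings):
    return hypot(to_eastings - from_eastings, to_northings - from_northings)

d = distance(459610, 451737, 460350, 452085)
print(d)  # ~817.74 metres, matching the REST API response
```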
<p>
The <a href="https://osdatahub.os.uk/downloads/open/CodePointOpen">Code-Point Open dataset</a> is available as a free download from the <a href="https://osdatahub.os.uk/">Ordnance Survey Data Hub</a>.
</p>
<h4>Dataset</h4>
<p>
CSV Zip Download - <a href="https://osdatahub.os.uk/downloads/open/CodePointOpen">Code-Point Open</a>
</p>
<h4>Source Code</h4>
<p>
Code available in GitHub - <a href="https://github.com/adrianwalker/codepoint-distance">codepoint-distance</a>
</p>
<h4>Build and Run</h4>
<p>
Build using Maven:
<br/>
<code>
mvn clean install
</code>
</p>
<p>
See the <a href="https://github.com/adrianwalker/codepoint-distance/blob/master/README.md">README.md</a> file on GitHub for AWS deployment instructions using the AWS Command Line Interface.
</p>
<h4>Example Usage</h4>
<p>The REST API takes two postcodes as URL parameters and returns the distance in metres, along with each postcode's eastings and northings.</p>
<p>Using curl from the Linux command line:</p>
<pre class="brush:bash">
curl -s https://77waizvyq3.execute-api.eu-west-2.amazonaws.com/Prod/codepoint/distance/YO241AB/YO17HH | json_pp
{
"distance" : 817.743235985477,
"toCodePoint" : {
"postcode" : "YO1 7HH",
"eastings" : 460350,
"northings" : 452085
},
"fromCodePoint" : {
"postcode" : "YO24 1AB",
"eastings" : 459610,
"northings" : 451737
}
}
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-90142802054841514562019-11-16T18:52:00.000+00:002019-11-18T21:11:22.812+00:00Start Stop Continue<p>
Start Stop Continue is a virtual post-it note board for Start / Stop / Continue style retrospectives. It is implemented using Java, jQuery, and JSON files for persistence.
</p>
<p>
The project is designed for simplicity and ease of extension rather than scalability. Even logging and error handling are secondary concerns at this point in the project.
</p>
<p>
An example instance of the site is hosted here: <a href="http://ststpcnt.com">ststpcnt.com</a>.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzRFvueAMFhwp9-8HHccgFFbvcdfOcOyxBqdqjqpZ5oB48KdnIAcR5Zl3AaTVK36GNfYLX_J2UBdnqdKzchFKqfLPXpL-rZ-eCR0hKEZpS1mfxuDjmLlgq_IsX-sqAuRY1S8YCkLqmdmg/s1600/ststpcnt1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzRFvueAMFhwp9-8HHccgFFbvcdfOcOyxBqdqjqpZ5oB48KdnIAcR5Zl3AaTVK36GNfYLX_J2UBdnqdKzchFKqfLPXpL-rZ-eCR0hKEZpS1mfxuDjmLlgq_IsX-sqAuRY1S8YCkLqmdmg/s320/ststpcnt1.png" width="320" height="165" data-original-width="1366" data-original-height="704" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1UL-SI3X1LbuJ2IiZD8cfDxODsMa880PmFPyFMiIUu_0sLtlszakV2A9ZTUeOFLieYmHjiee48t7QTz_VccOuqMPJ2n6D2243m2PJ7KX2LhDHY3302OgyJNRCTShGSqg86yk8DGI8ThM/s1600/ststpcnt2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1UL-SI3X1LbuJ2IiZD8cfDxODsMa880PmFPyFMiIUu_0sLtlszakV2A9ZTUeOFLieYmHjiee48t7QTz_VccOuqMPJ2n6D2243m2PJ7KX2LhDHY3302OgyJNRCTShGSqg86yk8DGI8ThM/s320/ststpcnt2.png" width="320" height="165" data-original-width="1366" data-original-height="704" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglmFsAlbmJAoEF1BP__KXcLczpZzU_VKJ2y3IT14vP_qrJAJ5RJz2OQP3qvnw8QcenN4HqPZhy7ot2OGWIbZ2ZTx6Xr77SI2vJxxP7BNt37n8i4TMcWUMA5gQp5tIwkuVrRq4ZiEFllqo/s1600/ststpcnt3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglmFsAlbmJAoEF1BP__KXcLczpZzU_VKJ2y3IT14vP_qrJAJ5RJz2OQP3qvnw8QcenN4HqPZhy7ot2OGWIbZ2ZTx6Xr77SI2vJxxP7BNt37n8i4TMcWUMA5gQp5tIwkuVrRq4ZiEFllqo/s320/ststpcnt3.png" width="182" 
height="320" data-original-width="305" data-original-height="537" /></a></div>
<h4>Source Code</h4>
<p>
Code available in GitHub - <a href="https://github.com/adrianwalker/start-stop-continue">start-stop-continue</a>
</p>
<h4>Setup</h4>
<p>
This project requires a minimum of <a href="https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html">Java 11 JDK</a> to build.
</p>
<h4>Build and Run</h4>
<p>
Build using Maven:
<br/>
<code>
mvn clean install
</code>
</p>
<p>
Run by executing the built jar file:
<br/>
<code>
java -jar start-stop-continue-jar-with-dependencies.jar
</code>
</p>
<p>
Browse to:
<br/>
<code>
http://localhost:8080/startstopcontinue
</code>
</p>
<p>
A new post-it note board with a unique URL will be created and notes can be added, edited and deleted. If this project is deployed to a publicly available host, the URL can be shared with other retrospective participants.
</p>
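<p>
The flow above - create a board with a unique URL, then persist notes as JSON files - can be sketched as follows. This is a hypothetical Python outline to show the idea, not the project's actual Java implementation; the storage directory and board structure are assumptions:
</p>

```python
import json
import tempfile
import uuid
from pathlib import Path

# Hypothetical storage location for board JSON files
DATA_DIR = Path(tempfile.gettempdir()) / "startstopcontinue"


def create_board():
    # The random hex ID doubles as the board's shareable URL path
    board = {"id": uuid.uuid4().hex, "start": [], "stop": [], "continue": []}
    save_board(board)
    return board


def save_board(board):
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    (DATA_DIR / (board["id"] + ".json")).write_text(json.dumps(board))


def load_board(board_id):
    return json.loads((DATA_DIR / (board_id + ".json")).read_text())
```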
<h4>Future Improvements</h4>
<p>
Possible future improvements include:
<ul>
<li>Add logging and more robust error handling</li>
<li>Integrate with a scalable datastore such as Apache Cassandra</li>
<li>Integrate with a scalable caching solution such as Redis</li>
<li>Use websockets for add/edit/delete live updates without refreshing the page</li>
<li>Port to AWS or other cloud based hosting provider</li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-30138665568881667662019-07-27T15:35:00.001+01:002019-07-27T20:23:48.961+01:00Raspberry Pi 4 Official Case Temperature <p>
My Raspberry Pi 4, running without a case, has an idle temperature of 54°C. With the official Pi 4 case the idle temperature jumps to 72°C.
</p>
<p>
The official case is completely hotboxed, allowing for absolutely no airflow. Since the Pi 4 begins to throttle the CPU at 80°C, this makes the official case a design disaster and useless without the addition of active cooling.
</p>
<p>
The <a href="https://noctua.at/en/products/fan">Noctua range of fans</a> gets great <a href="https://www.amazon.co.uk/gp/product/B071W6JZV8#customerReviews">reviews</a> and the fans are very well made, but you pay a premium for quality; they're pricey compared to other brands. I picked the 40mm x 20mm <a href="https://noctua.at/en/products/fan/nf-a4x20-5v">NF-A4x20 5v</a> for mounting on the outside of the Pi case.
</p>
<p>
If you want a slimmer fan to mount inside the case, go for the 40mm x 10mm <a href="https://noctua.at/en/products/fan/nf-a4x10-5v">NF-A4x10 5v</a>.
</p>
<h4>Case Modding</h4>
<div class="separator" style="clear: both; text-align: center;"></div>
<p>
I cut a 38mm hole in the top part of the case with a hole saw, at the end of the case away from where the Pi's USB and Ethernet ports are. Placing the fan over the hole, I marked out and drilled some screw holes for the screws provided with the fan.
</p>
<p>
In the side of the Pi case base, I drilled six 2mm holes at 1cm intervals to act as an air inlet/exhaust.
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfRJIWvJsqeG0QadIYcp9MTrDXWq-rkVzFZfwmGKX0u8u0lklVIyVCmXBiDSqrGXw-JLawdVDbNSapSVhuM8IT8wUOh8PXsHUBCR2Uqq6mrN7qN7rFAjzfKCA6V1K3LVV7Bog3LP6O3Eg/s1600/1-hole-saw.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfRJIWvJsqeG0QadIYcp9MTrDXWq-rkVzFZfwmGKX0u8u0lklVIyVCmXBiDSqrGXw-JLawdVDbNSapSVhuM8IT8wUOh8PXsHUBCR2Uqq6mrN7qN7rFAjzfKCA6V1K3LVV7Bog3LP6O3Eg/s320/1-hole-saw.jpg" height="150" data-original-width="1465" data-original-height="921" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWNhj8V5VeEUvXZzwMpVAs2bJ_76UCBx6laUibu_sODBUVbJBPWPbjyWvEApcO-gRmw-_A5djhlmoX04d64Us3xzghBuA3HqAypJgiY8jZnCmSeSc4j1eBY-302gYP5zZtPwV_O1pPzYc/s1600/2-case-top.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWNhj8V5VeEUvXZzwMpVAs2bJ_76UCBx6laUibu_sODBUVbJBPWPbjyWvEApcO-gRmw-_A5djhlmoX04d64Us3xzghBuA3HqAypJgiY8jZnCmSeSc4j1eBY-302gYP5zZtPwV_O1pPzYc/s320/2-case-top.jpg" height="150" data-original-width="1404" data-original-height="1600" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLQnf7Er2mBfn59_kBO_-iz9-l5oEPEEgy4qGIJPyH1-rncHEImii8pYZF_HG6eyRC5gIxP5GgBs7GauKxeWgeT0IGUG4HDCKLEl-_xD_NVrdqOlAv0tkas9ijP-OMy3Z70ylJ7c54pSI/s1600/3-case-bottom.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLQnf7Er2mBfn59_kBO_-iz9-l5oEPEEgy4qGIJPyH1-rncHEImii8pYZF_HG6eyRC5gIxP5GgBs7GauKxeWgeT0IGUG4HDCKLEl-_xD_NVrdqOlAv0tkas9ijP-OMy3Z70ylJ7c54pSI/s320/3-case-bottom.jpg" height="150" data-original-width="1600" data-original-height="851" /></a>
</div>
<h4>Fan Connector Modding</h4>
<p>
The fan comes with a big fat 3 pin connector, too big to fit on the Pi's GPIO pins. The fan does come with a 2 pin adapter which you can add your own connectors to, but I chose not to use it as it would just take up space in the Pi case. Instead, I cut off the original connector, removed some of the wire insulation and crimped some new DuPont connectors.
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh06sJ4ehZ7QJqn9P3KqVALlcJB6buhPXtciWOkzoM8hDaQE8Ypzgo_lmfV5hXSpJBSj_dsg72Hp64Fz-78WWvnu2XeYqCLuERG7Uz98ssgnpsOR_-qH-nlZnPON7y_pS2yNSKqqQ-GDeg/s1600/4-dupont-connectors.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh06sJ4ehZ7QJqn9P3KqVALlcJB6buhPXtciWOkzoM8hDaQE8Ypzgo_lmfV5hXSpJBSj_dsg72Hp64Fz-78WWvnu2XeYqCLuERG7Uz98ssgnpsOR_-qH-nlZnPON7y_pS2yNSKqqQ-GDeg/s320/4-dupont-connectors.jpg" height="150" data-original-width="1600" data-original-height="624" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikYAhc0vu5BRUs_Q_2kIyWDFk686sAd9vAnMe8sSZaYbNOC3X-dA-6PrXIMeIgvoZ9ILuhF9z2MhqyOkdzKjtul5hPimXj5kYcsIFYWVylvF6WqjrxRpy2s0-LnrEMlo3E08jwekIiz8c/s1600/5-crimp.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikYAhc0vu5BRUs_Q_2kIyWDFk686sAd9vAnMe8sSZaYbNOC3X-dA-6PrXIMeIgvoZ9ILuhF9z2MhqyOkdzKjtul5hPimXj5kYcsIFYWVylvF6WqjrxRpy2s0-LnrEMlo3E08jwekIiz8c/s320/5-crimp.jpg" height="150" data-original-width="1080" data-original-height="1400" /></a>
</div>
<p>
The black wire connects to one of the Pi's ground pins. The red wire connects to one of the Pi's 5v pins. The yellow wire is not required; I crimped a connector on it anyway, but kept it out of the way with some tape.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWWUXJOyqzUl3t_Uf-jW8Fvfa2atvZEdMyNFV3h-Fy7gtMdkx-Gz5CxWJ6ELAdostLEkcO4xarH2Ql8VPg03W9la5A1I_N44uaeXch-4MknR4KL0eUUVBNR4uIMw4p-7dm1vTYYhWKCwo/s1600/gpio.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWWUXJOyqzUl3t_Uf-jW8Fvfa2atvZEdMyNFV3h-Fy7gtMdkx-Gz5CxWJ6ELAdostLEkcO4xarH2Ql8VPg03W9la5A1I_N44uaeXch-4MknR4KL0eUUVBNR4uIMw4p-7dm1vTYYhWKCwo/s400/gpio.png" width="400" height="119" data-original-width="1498" data-original-height="446" /></a></div>
<h4>Suck vs Blow</h4>
<p>
Should you mount the fan to blow cooler air on to the Pi board and vent the warmer air through the side holes, or use the side holes as an inlet for cooler air and suck the warmer air away from the Pi board?
</p>
<p>
The only way to really know is to mount the fan both ways, stress-test the Pi, measure the temperature and compare the results. Install the stress package on the Pi using apt with the command:
</p>
<pre>
sudo apt-get install stress
</pre>
<p>
For the tests below I have used the stress command with the cpu, io, vm and hdd parameters, with 4 workers for each, running for 5 minutes (300 seconds):
</p>
<pre>
stress -c 4 -i 4 -m 4 -d 4 -t 300
</pre>
<p>
The Pi's temperature can be measured with:
</p>
<pre>
vcgencmd measure_temp
</pre>
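<p>
<code>vcgencmd measure_temp</code> prints a string such as <code>temp=54.0'C</code>; the sed substitution in the sampling loop below strips everything except the digits and the decimal point. The same extraction in Python, as a sketch if you would rather log temperatures from a script (the function name is my own, not part of the original setup):
</p>

```python
import re


def parse_temp(output):
    """Extract the numeric temperature from vcgencmd output like "temp=54.0'C"."""
    return float(re.search(r"[0-9]+(\.[0-9]+)?", output).group())


print(parse_temp("temp=54.0'C"))  # 54.0
```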
<p>
For the tests below, I sample the temperature every 5 seconds in a loop for 7 minutes (84 iterations) to record temperature rise and drop off:
</p>
<pre>
for i in {1..84}; do printf "`date "+%T"`\t`vcgencmd measure_temp | sed "s/[^0-9.]//g"`\n"; sleep 5; done
</pre>
<p>
<b>Test 1 – Blow</b>
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGA0ULf0a5DS5X5qdUz_K24RMD7WHT48Pvz5sT-KITw7E8EenGeuS0v-9a6fopcSJ5JlSXFBwCpawPxRc_FsalLLzoqUfJ2SosAPzS6SNd0vC8piFwW3EFk12EtYpK2tuhjQdFIh1ZT54/s1600/6-blow-fan.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGA0ULf0a5DS5X5qdUz_K24RMD7WHT48Pvz5sT-KITw7E8EenGeuS0v-9a6fopcSJ5JlSXFBwCpawPxRc_FsalLLzoqUfJ2SosAPzS6SNd0vC8piFwW3EFk12EtYpK2tuhjQdFIh1ZT54/s320/6-blow-fan.jpg" height="150" data-original-width="1529" data-original-height="1600" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIugmDAGihD-bbsx8eu2l36FObUcK5HhNCMugJSCsaRmOBukjUN8GTd_uwqUlF52vmy-kX7WG4i1NCuoSMVPA6m1pVZK7uAekpgqyCicquYvHZlfk1fa-A2dIoLOSvTFX5XADL7V0IuWE/s1600/7-blow-open.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIugmDAGihD-bbsx8eu2l36FObUcK5HhNCMugJSCsaRmOBukjUN8GTd_uwqUlF52vmy-kX7WG4i1NCuoSMVPA6m1pVZK7uAekpgqyCicquYvHZlfk1fa-A2dIoLOSvTFX5XADL7V0IuWE/s320/7-blow-open.jpg" height="150" data-original-width="1514" data-original-height="1600" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiImfsKE79oSCLDxyECq6f68URvt52c3ZX8FFyecevN0dkK_GuSSBCa7n_E-legj9WkPc1rQZurZdHspmaw6zylt5P1n5Ai7FGCFJWmbxI8TU3I0u49d2Cl73tOJt_8biIrcpB3zJdScAY/s1600/8-blow-closed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiImfsKE79oSCLDxyECq6f68URvt52c3ZX8FFyecevN0dkK_GuSSBCa7n_E-legj9WkPc1rQZurZdHspmaw6zylt5P1n5Ai7FGCFJWmbxI8TU3I0u49d2Cl73tOJt_8biIrcpB3zJdScAY/s320/8-blow-closed.jpg" height="150" data-original-width="1596" data-original-height="1151" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyT6wAfmy1xbkBQFcDdtPG5-wFNTuqjztzBcGYiF6VEzcYQDLV-Zr7wg8ejPAWcxrj0tyKylLmDFdj7OvTXLxXzRnwT8K157F7yJLQDDX-imUmryv2gejsCc_L6_JBBRdggFbdDAYfqw/s1600/9-blow-closed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyT6wAfmy1xbkBQFcDdtPG5-wFNTuqjztzBcGYiF6VEzcYQDLV-Zr7wg8ejPAWcxrj0tyKylLmDFdj7OvTXLxXzRnwT8K157F7yJLQDDX-imUmryv2gejsCc_L6_JBBRdggFbdDAYfqw/s320/9-blow-closed.jpg" height="150" data-original-width="1600" data-original-height="1090" /></a>
</div>
<p>
Mounting the fan with the sticker side down to blow air onto the board, connecting the power pins, closing the case and running the stress test gave the following results:
</p>
<pre>
$ stress -c 4 -i 4 -m 4 -d 4 -t 300
stress: info: [1074] dispatching hogs: 4 cpu, 4 io, 4 vm, 4 hdd
stress: info: [1074] successful run completed in 303s
</pre>
<pre>
$ for i in {1..84}; do printf "`date "+%T"`\t`vcgencmd measure_temp | sed "s/[^0-9.]//g"`\n"; sleep 5; done
10:59:42 38.0
10:59:47 37.0
10:59:52 43.0
10:59:57 45.0
11:00:02 47.0
11:00:07 48.0
11:00:12 48.0
11:00:17 49.0
11:00:22 49.0
11:00:27 50.0
11:00:32 50.0
11:00:37 51.0
11:00:42 51.0
11:00:48 52.0
11:00:53 52.0
11:00:58 51.0
11:01:03 53.0
11:01:08 52.0
11:01:13 52.0
11:01:18 53.0
11:01:23 53.0
11:01:28 53.0
11:01:34 53.0
11:01:42 52.0
11:01:48 53.0
11:01:55 52.0
11:02:00 54.0
11:02:05 54.0
11:02:10 54.0
11:02:15 53.0
11:02:20 53.0
11:02:25 53.0
11:02:30 53.0
11:02:35 54.0
11:02:41 54.0
11:02:46 54.0
11:02:51 53.0
11:02:56 52.0
11:03:01 54.0
11:03:06 53.0
11:03:11 54.0
11:03:16 53.0
11:03:21 54.0
11:03:26 54.0
11:03:31 54.0
11:03:36 54.0
11:03:41 54.0
11:03:46 54.0
11:03:51 54.0
11:03:56 54.0
11:04:01 53.0
11:04:06 54.0
11:04:11 53.0
11:04:16 54.0
11:04:21 53.0
11:04:26 54.0
11:04:31 53.0
11:04:37 54.0
11:04:42 53.0
11:04:47 54.0
11:04:52 49.0
11:04:57 46.0
11:05:02 45.0
11:05:07 44.0
11:05:12 46.0
11:05:17 43.0
11:05:22 42.0
11:05:27 42.0
11:05:32 41.0
11:05:37 40.0
11:05:42 41.0
11:05:47 40.0
11:05:52 40.0
11:05:57 41.0
11:06:02 39.0
11:06:07 40.0
11:06:12 39.0
11:06:17 39.0
11:06:22 38.0
11:06:27 38.0
11:06:32 38.0
11:06:37 38.0
11:06:42 39.0
11:06:47 38.0
</pre>
<p>
<b>Test 2 – Suck</b>
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjJh5dvKKoKEUTlpG7z04p5fiJlqwD_35hOShTVB2Hd_1gImIKromoICLb-SJDanoAv3Rzqs_hHEi23US7HjBeN8iOTw4aMj3x3BaYG5jM_UrZ7wLig9C_UY6wENKvx2urAklww87HEwM/s1600/10-suck-fan.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjJh5dvKKoKEUTlpG7z04p5fiJlqwD_35hOShTVB2Hd_1gImIKromoICLb-SJDanoAv3Rzqs_hHEi23US7HjBeN8iOTw4aMj3x3BaYG5jM_UrZ7wLig9C_UY6wENKvx2urAklww87HEwM/s320/10-suck-fan.jpg" height="150" data-original-width="1600" data-original-height="1318" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicQ299WDt9krwH2LzxlFoopc1dT34QWYzGs_zVenRvACtn4hyni9-BeSk5T6RpeGrPDwj8ehRpAiKyUTRWp9OVhkafesCyhwqs81-XaUu28U12RgYCG-CABXQ5ECTPDMr6RBwKgREbOH8/s1600/11-suck-open.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicQ299WDt9krwH2LzxlFoopc1dT34QWYzGs_zVenRvACtn4hyni9-BeSk5T6RpeGrPDwj8ehRpAiKyUTRWp9OVhkafesCyhwqs81-XaUu28U12RgYCG-CABXQ5ECTPDMr6RBwKgREbOH8/s320/11-suck-open.jpg" height="150" data-original-width="1600" data-original-height="1136" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmn7AhMnmAxXwzwa7oRtQRVGy-7SWKzSdOSaSbKMi-O2IAPvTgOVoIdg4lQ-WjwkpNsm9uQ9kxCQRRkljffORFWbxlSu_oUBmMSuJDpR9_NkSqX44S85zZ7KCv7Cy1DOo4i02D9Lgbx-U/s1600/12-suck-closed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmn7AhMnmAxXwzwa7oRtQRVGy-7SWKzSdOSaSbKMi-O2IAPvTgOVoIdg4lQ-WjwkpNsm9uQ9kxCQRRkljffORFWbxlSu_oUBmMSuJDpR9_NkSqX44S85zZ7KCv7Cy1DOo4i02D9Lgbx-U/s320/12-suck-closed.jpg" height="150" data-original-width="1600" data-original-height="1039" /></a>
</div>
<p>
Re-mounting the fan with the sticker side up to suck air away from the board, connecting the power pins, closing the case and running the stress test gave the following results:
</p>
<pre>
$ stress -c 4 -i 4 -m 4 -d 4 -t 300
stress: info: [1041] dispatching hogs: 4 cpu, 4 io, 4 vm, 4 hdd
stress: info: [1041] successful run completed in 302s
</pre>
<pre>
$ for i in {1..84}; do printf "`date "+%T"`\t`vcgencmd measure_temp | sed "s/[^0-9.]//g"`\n"; sleep 5; done
11:22:41 39.0
11:22:46 40.0
11:22:51 46.0
11:22:56 49.0
11:23:01 50.0
11:23:06 51.0
11:23:11 52.0
11:23:16 52.0
11:23:21 52.0
11:23:26 52.0
11:23:31 53.0
11:23:36 54.0
11:23:41 54.0
11:23:46 54.0
11:23:51 55.0
11:23:56 55.0
11:24:01 55.0
11:24:06 54.0
11:24:11 55.0
11:24:16 55.0
11:24:22 55.0
11:24:27 54.0
11:24:37 55.0
11:24:42 56.0
11:24:47 57.0
11:24:52 56.0
11:24:57 57.0
11:25:02 55.0
11:25:07 56.0
11:25:12 56.0
11:25:17 57.0
11:25:22 56.0
11:25:27 57.0
11:25:32 56.0
11:25:37 57.0
11:25:42 58.0
11:25:47 58.0
11:25:53 58.0
11:25:58 58.0
11:26:03 57.0
11:26:08 58.0
11:26:13 57.0
11:26:18 58.0
11:26:23 58.0
11:26:28 57.0
11:26:33 58.0
11:26:38 57.0
11:26:43 57.0
11:26:48 58.0
11:26:53 58.0
11:26:58 59.0
11:27:03 58.0
11:27:08 58.0
11:27:13 57.0
11:27:18 58.0
11:27:23 59.0
11:27:28 58.0
11:27:33 58.0
11:27:38 58.0
11:27:43 58.0
11:27:48 55.0
11:27:53 51.0
11:27:58 49.0
11:28:03 48.0
11:28:09 47.0
11:28:14 46.0
11:28:19 46.0
11:28:24 46.0
11:28:29 45.0
11:28:34 45.0
11:28:39 44.0
11:28:44 44.0
11:28:49 43.0
11:28:54 44.0
11:28:59 44.0
11:29:04 42.0
11:29:09 42.0
11:29:14 42.0
11:29:19 42.0
11:29:24 43.0
11:29:29 43.0
11:29:34 42.0
11:29:39 42.0
11:29:44 42.0
</pre>
<h4>Comparison</h4>
<p>
Blowing air keeps the Pi cooler than sucking air, with temperature ranges of 37°C-54°C and 39°C-59°C respectively for this fan/vent combination.
</p>
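<p>
The ranges quoted above were read off the tab-separated logs. A small Python helper makes the comparison mechanical (a hypothetical post-processing step, not something used during the tests):
</p>

```python
def summarize(log):
    """Return (min, max) temperature from lines of "HH:MM:SS<TAB>temp"."""
    temps = [float(line.split("\t")[1]) for line in log.strip().splitlines()]
    return min(temps), max(temps)


# Excerpt from the blow test log
blow_excerpt = "10:59:47\t37.0\n11:02:00\t54.0\n11:06:47\t38.0"
print(summarize(blow_excerpt))  # (37.0, 54.0)
```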
<p>
When sucking air, the Pi still hasn't returned to its original idle temperature 2 minutes after the stress test has ended.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghkGWmp2FtA7kN8inaDxTZ8O3XOJVbdS54bF4iFPFvYU17h0Dlh84dzzDFJS3fTUsxQRb3bW-DJP0FRlR4F9GTdcCg1G-V8H2uDqLstAuIED7FSuwss_0nsKtLyUFBcmPWrVP2jCAyXHc/s1600/13-blow-vs-suck-temp.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghkGWmp2FtA7kN8inaDxTZ8O3XOJVbdS54bF4iFPFvYU17h0Dlh84dzzDFJS3fTUsxQRb3bW-DJP0FRlR4F9GTdcCg1G-V8H2uDqLstAuIED7FSuwss_0nsKtLyUFBcmPWrVP2jCAyXHc/s1600/13-blow-vs-suck-temp.png" data-original-width="728" data-original-height="428" /></a></div>
<h4>Parts list and prices</h4>
<p>
<table>
<tr>
<th>Part</th>
<th>Price</th>
<th>Link</th>
</tr>
<tr>
<td>38mm Hole Saw</td>
<td>£4.59</td>
<td><a href="https://www.ebay.co.uk/itm/143196534863">https://www.ebay.co.uk/itm/143196534863</a></td>
</tr>
<tr>
<td>DuPont Connectors</td>
<td>£2.60</td>
<td><a href="https://www.ebay.co.uk/itm/264250195674">https://www.ebay.co.uk/itm/264250195674</a></td>
</tr>
<tr>
<td>Noctua NF-A4x20 5V</td>
<td>£13.40</td>
<td><a href="https://www.amazon.co.uk/gp/product/B071W6JZV8">https://www.amazon.co.uk/gp/product/B071W6JZV8</a></td>
</tr>
</table>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-56654296750724622542019-07-06T13:47:00.001+01:002019-07-07T15:17:52.381+01:00Raspberry Pi Backup Server<h4>Getting Old</h4>
<p>
Recently I've found myself lying awake at night worrying if my documents, code and photos are backed up and recoverable. Or to put it another way - I've officially become old :-(
</p>
<p>
With a new Raspberry Pi 4B on order it's time to re-purpose the old Raspberry Pi 3B to create a backup solution.
</p>
<h4>Hardware</h4>
<p>
I want my backup solution and backup media to be small, cheap and redundant. Speed isn't really an issue, so I've chosen micro SD as my backup media for this project.
</p>
<p>
I've picked up an Anker 4-Port USB hub, 2 SanDisk 64 GB micro SD cards and 2 SanDisk MobileMate micro SD card readers. I ordered this kit from Amazon and the prices at the time of writing were:
</p>
<table>
<tr>
<th>Component</th><th>Price</th>
</tr>
<tr>
<td><a href="https://www.amazon.co.uk/gp/product/B00Y25XFGK">Anker 4-Port USB 3.0 Ultra Slim Data Hub</a></td>
<td>£10.99</td>
</tr>
<tr>
<td><a href="https://www.amazon.co.uk/gp/product/B073JYVKNX">SanDisk Ultra 64 GB microSDXC</a></td>
<td>£11.73</td>
</tr>
<tr>
<td><a href="https://www.amazon.co.uk/gp/product/B07G5JV2B5">SanDisk MobileMate USB 3.0 Reader</a></td>
<td>£7.50</td>
</tr>
</table>
<p>
They fit together really well, with room for two more SD cards and readers if I need to expand:
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLEuRIUKoTlXWhF9pMplyMGT-XIxY_qx4892-ifuCNR0tJRvpHTtl98ISGRUfkBbemQf1bQHskdXfxSHHaQJ3WK9fmlyO6dYqm-WCSpRn964pSGQtZiLLY6365l2MhlZ0cW1gzI0S8k0g/s1600/backup-server-components.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;">
<img border="0" data-original-height="1279" data-original-width="1600" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLEuRIUKoTlXWhF9pMplyMGT-XIxY_qx4892-ifuCNR0tJRvpHTtl98ISGRUfkBbemQf1bQHskdXfxSHHaQJ3WK9fmlyO6dYqm-WCSpRn964pSGQtZiLLY6365l2MhlZ0cW1gzI0S8k0g/s320/backup-server-components.jpg" width="320" />
</a>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTUizbZEB2vWBIA61Y1eI07M1F6F8Tcmpm0MCK6_t16EQjRuFfhbxfGHV1W42gSltzmOVPSN-3xppyk5OJEA3yotIrgV7Y6yQLt2EfUSFYR82KAJjN7fvHHzQNwkJmRtt6X42e6ehczs0/s1600/backup-server-assembled.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;">
<img border="0" data-original-height="1372" data-original-width="1600" height="274" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTUizbZEB2vWBIA61Y1eI07M1F6F8Tcmpm0MCK6_t16EQjRuFfhbxfGHV1W42gSltzmOVPSN-3xppyk5OJEA3yotIrgV7Y6yQLt2EfUSFYR82KAJjN7fvHHzQNwkJmRtt6X42e6ehczs0/s320/backup-server-assembled.jpg" width="320" />
</a>
</div>
<p>
The plan is to make one of the SD cards available over the network as a share, via the Pi using <a href="https://www.samba.org/">SAMBA</a>.
The share can be mapped as a Windows network drive and files can easily be dragged and dropped for backup. In case the first backup SD
card fails, the Pi will copy the files and folders from the first SD card to the second SD card using
<a href="https://en.wikipedia.org/wiki/Rsync">rsync</a> to create a backup of the backup.
</p>
<h4>
Software
</h4>
<p>
Download and upgrade the Pi 3B to the latest version of <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a>.
I've chosen Raspbian Lite to save a bit of space on the Pi's SD card:
</p>
<p>
<a href="https://downloads.raspberrypi.org/raspbian_lite_latest">https://downloads.raspberrypi.org/raspbian_lite_latest</a>
</p>
<p>
At the time of writing the latest download was:
<code>
2019-06-20-raspbian-buster-lite.zip
</code>
</p>
<p>
Write the OS to the Pi's SD card using <a href="https://www.balena.io/etcher/">Etcher</a>. Top tip - Etcher can write a <i>.zip</i> file,
but it's much quicker to extract the <i>.img</i> file from the <i>.zip</i> file and write that instead.
</p>
<p>
Don't forget to add an empty <i>ssh</i> file to the boot partition on the Pi's SD card if you are going to run the Pi headless.
</p>
<p>
Put the Pi's SD card into the Pi, attach the USB hub and micro SD cards, then boot the Pi and log in via SSH. Update and upgrade
any new packages first, enable unattended security updates and install your editor of choice:
</p>
<pre>
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install unattended-upgrades
$ sudo apt-get install vim
</pre>
<p>
Because I've got a Pi 4 on the way, I want to call this Pi 'raspberrypi3'. Modify the <i>/etc/hostname</i> and <i>/etc/hosts</i> files:
</p>
<pre>
$ sudo vim /etc/hostname
raspberrypi3
</pre>
<pre>
$ sudo vim /etc/hosts
127.0.1.1 raspberrypi3
</pre>
<pre>
$ sudo reboot
</pre>
<p>
At this point, the backup SD cards should be available to Linux as devices <i>/dev/sda</i> and <i>/dev/sdb</i>.
</p>
<p>
I want the backup SD cards to be readable on Linux and Windows machines using the <a href="https://en.wikipedia.org/wiki/ExFAT">exFAT</a>
file system. A good tutorial on how to do this on Linux using <a href="https://en.wikipedia.org/wiki/Filesystem_in_Userspace">FUSE</a>
and gdisk is available here:
</p>
<p>
<a href="https://matthew.komputerwiz.net/2015/12/13/formatting-universal-drive.html">https://matthew.komputerwiz.net/2015/12/13/formatting-universal-drive.html</a>
</p>
<pre>
$ sudo apt-get install exfat-fuse exfat-utils
$ sudo apt-get install gdisk
</pre>
<p>
Use gdisk to remove any existing partitions, create a new partition and write this to the SD cards. Make sure to create the new partition
as type <i>0700</i> (Microsoft basic data) when prompted:
</p>
<pre>
$ sudo gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.8
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.
Command (? for help):
</pre>
<pre>
Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): Y
</pre>
<pre>
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-16326462, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-16326462, default = 16326462) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 0700
Changed type of partition to 'Microsoft basic data'
</pre>
<pre>
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
</pre>
<p>
Repeat for the second SD card:
</p>
<pre>
$ sudo gdisk /dev/sdb
</pre>
<p>
Create exFAT partitions on both SD cards and label the partitions <i>PRIMARY</i> and <i>SECONDARY</i>:
</p>
<pre>
$ sudo mkfs.exfat /dev/sda1
$ sudo exfatlabel /dev/sda1 PRIMARY
</pre>
<pre>
$ sudo mkfs.exfat /dev/sdb1
$ sudo exfatlabel /dev/sdb1 SECONDARY
</pre>
<p>
Create directories to mount the new partitions on:
</p>
<pre>
$ sudo mkdir -p /media/usb/backup/primary
$ sudo mkdir -p /media/usb/backup/secondary
</pre>
<p>
Modify <i>/etc/fstab</i> to mount the SD cards by partition label. This allows us to mount the correct card regardless of its device path
or UUID:
</p>
<pre>
$ sudo vim /etc/fstab
LABEL=PRIMARY /media/usb/backup/primary exfat defaults 0 0
LABEL=SECONDARY /media/usb/backup/secondary exfat defaults 0 0
</pre>
<p>
Mount the SD cards:
</p>
<pre>
$ sudo mount /media/usb/backup/primary
$ sudo mount /media/usb/backup/secondary
</pre>
<p>
Create a cron job to rsync files from the primary card to the secondary card. The following entry syncs the files every day at 4am:
</p>
<pre>
$ sudo crontab -e
0 4 * * * rsync -av --delete /media/usb/backup/primary/ /media/usb/backup/secondary/
</pre>
<p>
To sync files immediately, rsync can be run from the command line at any time with:
</p>
<pre>
$ sudo rsync -av --delete /media/usb/backup/primary/ /media/usb/backup/secondary/
</pre>
<p>
To make the primary SD card available as a Windows share, install and configure SAMBA:
</p>
<pre>
$ sudo apt-get install samba samba-common-bin
$ sudo vim /etc/samba/smb.conf
[backup]
comment = Pi backup share
path = /media/usb/backup/primary
public = yes
browseable = yes
writable = yes
create mask = 0777
directory mask = 0777
$ sudo service smbd restart
</pre>
<p>
Finally, install and configure UFW firewall, allowing incoming connections for SSH and SAMBA only:
</p>
<pre>
$ sudo apt-get install ufw
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
$ sudo ufw allow samba
$ sudo ufw enable
</pre>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-14247191057002154682019-03-09T13:46:00.001+00:002019-03-10T13:03:12.161+00:00Card Table<p>
Card Table is a multi-player, web-based virtual card table implemented using Java, plain JavaScript, WebSockets and Postgres.
</p>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='640' height='360' src='https://www.blogger.com/video.g?token=AD6v5dwzLtiTVPWyK6vaZ2-XIl4ustigBZ5aZCrR9OKUcQVCCI_uWoo2eLB_GGj1ZXGparv993Xc-qw_f3ikKgJz1w' class='b-hbp-video b-uploaded' frameborder='0'></iframe>
</div>
<h4>Source Code</h4>
<p>
Code available in GitHub - <a href="https://github.com/adrianwalker/card-table">card-table</a>
</p>
<h4>Setup</h4>
<p>
This project requires a minimum of <a href="https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html">Java 8 JDK</a> to build and a <a href="https://www.postgresql.org/download/">Postgres</a> installation.
</p>
<p>
A drop/create Postgres SQL script needs to be run to create and initialise the database with default data:
<br />
<a href="https://github.com/adrianwalker/card-table/blob/master/src/main/resources/sql/drop-create-tables.sql">src/main/resources/sql/drop-create-tables.sql</a>
</p>
<p>
Configure the Java web application's database dev configuration:
<br />
<a href="https://github.com/adrianwalker/card-table/blob/master/src/main/resources/config/dev.properties">src/main/resources/config/dev.properties</a>
</p>
<h4>Build and Run</h4>
<p>
Build and run using Maven with an embedded Tomcat:
<br />
<br />
<code>
mvn clean install tomcat7:run-war
</code>
<br />
<br />
Browse to:
<br />
<br />
<code>
http://localhost:8080/cardtable
</code>
<br />
<br />
A new card table will be created with a unique URL. If this project is deployed to a publicly available host, the URL can be shared with other players to play against.
</p>
<h4>Mouse Controls</h4>
<p>
Packs of cards can be dragged from the side bar and dropped on the table to create a new deck. Currently there are two decks - both standard 52-card decks, one with a black back and one with a red back.
</p>
<p>
Single cards can be clicked and dragged to move them around the table. Multiple cards can be selected by clicking and dragging the mouse and drawing a selection box around the cards to be selected. Selected cards can be clicked and dragged to move more than one card.
</p>
<p>
Clicking a single card will turn the card face up/face down. Clicking multiple selected cards will shuffle the selected cards.
</p>
<p>
Moving cards to the bottom of the table, below the green line, hides them from other players. Any card actions which take place here, e.g. moving, turning and shuffling will not be broadcast to other players.
</p>
<p>
Dragging single or multiple cards off the screen removes them from the table.
</p>
<p>
See the video above for examples of all these actions.
</p>
<h4>Supported Browsers</h4>
<p>
Currently only desktop browsers are supported due to the lack of native drag-and-drop JavaScript support on mobile devices. At the time of writing, Card Table has been tested on Chrome 72, Firefox 65, Edge 42, IE 11 and Opera 58.
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-42400387150022689282018-08-29T22:57:00.001+01:002018-08-29T23:06:23.315+01:00Java 9/10 Multiline String<p>
My <a href="http://www.adrianwalker.org/2011/12/java-multiline-string.html">Java Multiline String</a> project stopped building when compiling with Java 10 because <code>tools.jar</code> has been removed since Java 9.
</p>
<p>
When the <code>tools.jar</code> dependency is specified like this:
</p>
<p>
pom.xml
</p>
<pre class="brush:xml">
...
<dependencies>
<dependency>
<groupId>sun.jdk</groupId>
<artifactId>tools</artifactId>
<version>LATEST</version>
<scope>system</scope>
<systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>
</dependencies>
...
</pre>
<p>
The build failed with output:
</p>
<pre>
------------------------------------------------------------------------
BUILD FAILURE
------------------------------------------------------------------------
Total time: 0.347 s
Finished at: 2018-08-29T21:10:41+01:00
Final Memory: 6M/24M
------------------------------------------------------------------------
Failed to execute goal on project multiline-string: Could not resolve dependencies for project org.adrianwalker:multiline-string:jar:0.2.1: Could not find artifact sun.jdk:tools:jar:LATEST at specified path /usr/local/jdk-10.0.1/../lib/tools.jar -> [Help 1]
</pre>
<p>
Simply removing the dependency fixes the build and the project compiles without error. So where are the classes from the <code>com.sun.tools.javac</code> packages that were in <code>tools.jar</code>?
</p>
<p>
In JDK versions 1.8 and lower:
</p>
<pre>
cd /usr/local/jdk1.8.0_172
unzip -l ./lib/tools.jar | grep com/sun/tools/javac/tree/TreeMaker.class
47366 2018-03-28 21:40 com/sun/tools/javac/tree/TreeMaker.class
</pre>
<p>
In JDK version 10:
</p>
<pre>
cd /usr/local/jdk-10.0.1
unzip -l ./jmods/jdk.compiler.jmod | grep com/sun/tools/javac/tree/TreeMaker.class
warning [./jmods/jdk.compiler.jmod]: 4 extra bytes at beginning or within zipfile
(attempting to process anyway)
64266 2018-03-26 18:16 classes/com/sun/tools/javac/tree/TreeMaker.class
</pre>
<p>
I still want to be able to compile this library with all JDK versions from 1.6 onwards, without creating another project for versions 9 and 10. To do this, we can move the <code>tools.jar</code> dependency to a profile which is only activated for older JDKs:
</p>
<p>
pom.xml
</p>
<pre class="brush:xml">
...
<profiles>
<profile>
<activation>
<jdk>[1.6,9)</jdk>
</activation>
<dependencies>
<dependency>
<groupId>sun.jdk</groupId>
<artifactId>tools</artifactId>
<version>LATEST</version>
<scope>system</scope>
<systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>
</dependencies>
</profile>
</profiles>
...
</pre>
<p>
The line <code><jdk>[1.6,9)</jdk></code> specifies a version range using the <a href="https://maven.apache.org/enforcer/enforcer-rules/versionRanges.html">Apache Maven Enforcer range syntax</a>. In this case, it includes all versions from 1.6 up to, but not including, 9.
</p>
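<p>
For reference, the same interval notation covers other common cases. The ranges below are illustrative examples only, not taken from the project's pom.xml:
</p>

```xml
<!-- Illustrative JDK activation ranges (not from the project's pom.xml) -->
<jdk>[1.6,9)</jdk>  <!-- 1.6 inclusive, up to but not including 9 -->
<jdk>[9,)</jdk>     <!-- 9 and anything above -->
<jdk>(,1.8]</jdk>   <!-- anything up to and including 1.8 -->
```

<p>
A bare value with no brackets, e.g. <code><jdk>1.8</jdk></code>, is treated by profile activation as a version prefix match rather than a range.
</p>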
<p>
Aside from pom.xml changes, the Java code and usage remains identical to the <a href="http://www.adrianwalker.org/2011/12/java-multiline-string.html">original project</a>.
</p>
<h4>Java 9/10 module system</h4>
<p>
This all only works because the <code>maven-compiler-plugin</code> is configured with <code>source</code> and <code>target</code> set to <code>1.6</code>:
</p>
<p>
pom.xml
</p>
<pre class="brush:xml">
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.6</source>
<target>1.6</target>
</configuration>
</plugin>
...
</pre>
<p>
If we want to use Java 9/10 language features, setting <code>source</code> and <code>target</code> to <code>10</code> will give these errors:
</p>
<pre>
Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project multiline-string: Compilation failure: Compilation failure:
org/adrianwalker/multilinestring/MultilineProcessor.java:[3,27] package com.sun.tools.javac.model is not visible
(package com.sun.tools.javac.model is declared in module jdk.compiler, which does not export it to the unnamed module)
org/adrianwalker/multilinestring/MultilineProcessor.java:[4,27] package com.sun.tools.javac.processing is not visible
(package com.sun.tools.javac.processing is declared in module jdk.compiler, which does not export it)
org/adrianwalker/multilinestring/MultilineProcessor.java:[5,27] package com.sun.tools.javac.tree is not visible
(package com.sun.tools.javac.tree is declared in module jdk.compiler, which does not export it to the unnamed module)
org/adrianwalker/multilinestring/MultilineProcessor.java:[6,27] package com.sun.tools.javac.tree is not visible
(package com.sun.tools.javac.tree is declared in module jdk.compiler, which does not export it to the unnamed module)
</pre>
<p>
In this case we must correctly use the new Java <a href="http://openjdk.java.net/projects/jigsaw/">Module System</a>. To resolve the above errors, first we need a <code>module-info.java</code> in the project root, specifying a module name and the module's requirements:
</p>
<p>
module-info.java
</p>
<pre class="brush:java">
module org.adrianwalker.multilinestring {
requires jdk.compiler;
}
</pre>
<p>
Next we need to export the required packages in the <code>jdk.compiler</code> module and make them visible to our <code>org.adrianwalker.multilinestring</code> module:
</p>
<p>
pom.xml
</p>
<pre class="brush:xml">
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<source>10</source>
<target>10</target>
<compilerArgs>
<arg>--add-exports</arg>
<arg>jdk.compiler/com.sun.tools.javac.model=org.adrianwalker.multilinestring</arg>
<arg>--add-exports</arg>
<arg>jdk.compiler/com.sun.tools.javac.processing=org.adrianwalker.multilinestring</arg>
<arg>--add-exports</arg>
<arg>jdk.compiler/com.sun.tools.javac.tree=org.adrianwalker.multilinestring</arg>
</compilerArgs>
</configuration>
</plugin>
...
</pre>
<p>
And now the project should build without errors and work just as before.
</p>
<h4>Source Code</h4>
<p>
<ul>
<li>Github - <a href="https://github.com/adrianwalker/multiline-string">multiline-string</a></li>
<li>Github - <a href="https://github.com/adrianwalker/multiline-string/tree/Java-10">multiline-string</a> (Java 10 branch)</li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-51501544827895389092018-08-27T18:56:00.001+01:002018-08-27T21:49:33.745+01:00Enforcing Multi-Tier Architecture<p>
So you've designed an application using the principles of <a href="https://en.wikipedia.org/wiki/Separation_of_concerns">separation of concerns</a> and a <a href="https://en.wikipedia.org/wiki/Multitier_architecture">multi-tier architecture</a>. It's a delight to navigate and maintain the code base, and the architecture might look something like this:
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwfAV7pOSQZ6IpzGQRTl4PLrGtodZ2xyPC89yyM7pKCzgrYTt2kqJFNNz40-eogYUnWIoftl5A9TU3zMHQredKBS-Dmw4FCrZRj28sv3MyO6OZa1SR_zexBcpwwDpW4sfcSIUcO3a_EOI/s1600/multitier.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwfAV7pOSQZ6IpzGQRTl4PLrGtodZ2xyPC89yyM7pKCzgrYTt2kqJFNNz40-eogYUnWIoftl5A9TU3zMHQredKBS-Dmw4FCrZRj28sv3MyO6OZa1SR_zexBcpwwDpW4sfcSIUcO3a_EOI/s400/multitier.png" width="400" height="341" data-original-width="640" data-original-height="545" /></a>
</div>
<p>
The <a href="https://en.wikipedia.org/wiki/Presentation_layer">presentation layer</a> talks to the <a href="https://en.wikipedia.org/wiki/Application_layer">application layer</a>, which talks to the <a href="https://en.wikipedia.org/wiki/Data_access_layer">data access layer</a>. The <a href="https://en.wikipedia.org/wiki/Facade_pattern">facade object</a> provides a high-level interface for API consumers, talking to the <a href="https://en.wikipedia.org/wiki/Service_layer_pattern">service objects</a>, which call objects encapsulating <a href="https://en.wikipedia.org/wiki/Business_object">business logic</a>, which operate on data provided by the <a href="https://en.wikipedia.org/wiki/Data_access_object">data access objects</a>. Life is good.
</p>
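<p>
As a minimal sketch of that call chain - with hypothetical class names, not taken from any real project - each layer only ever calls the layer directly below it:
</p>

```java
import java.util.UUID;

public class MultiTierSketch {

  // Data access layer: talks to storage (stubbed out here)
  static class PersonDao {
    UUID insert(final String name) {
      return UUID.nameUUIDFromBytes(name.getBytes());
    }
  }

  // Business logic layer: validates input and applies rules
  static class PersonBusinessObject {
    private final PersonDao dao = new PersonDao();

    UUID create(final String name) {
      if (null == name || name.trim().isEmpty()) {
        throw new IllegalArgumentException("name required");
      }
      return dao.insert(name.trim());
    }
  }

  // Service layer: coordinates business objects
  static class PersonService {
    private final PersonBusinessObject businessObject = new PersonBusinessObject();

    UUID save(final String name) {
      return businessObject.create(name);
    }
  }

  // Facade: the only entry point the presentation layer should call
  public static class PersonFacade {
    private final PersonService service = new PersonService();

    public UUID savePerson(final String name) {
      return service.save(name);
    }
  }

  public static void main(final String[] args) {
    System.out.println(new PersonFacade().savePerson("Alice"));
  }
}
```

<p>
Note that nothing in the language stops <code>PersonFacade</code> calling <code>PersonDao</code> directly - which is exactly the problem this post addresses.
</p>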
<p>
Eventually other programmers will have to maintain and add new features to your application, possibly in your absence. How do you communicate your design intentions to future maintainers? The above diagram, a bit of documentation, and some programming rigour should suffice. Back in the real world, programmers face time pressures which prevent them from creating and updating documentation, and managers and customers don't care about code maintainability - they want their features yesterday. When getting the code into production as fast as possible is the only focus, clean code and architecture are soon forgotten.
</p>
<p>
To quote <a href="https://mdjnewman.me/2013/10/john-carmack-on-type-systems/">John Carmack</a>:
</p>
<p>
"It’s just amazing how many mistakes and how bad programmers can be. Everything that is syntactically legal, that the compiler will accept, will eventually wind up in your code base."
</p>
<p>
Carmack was talking about the usefulness of static typing here, but the same problem also applies to code architecture: over time, whatever can happen, will happen. Your well designed architecture will risk turning into <a href="https://en.wikipedia.org/wiki/Spaghetti_code">spaghetti code</a>, with objects calling methods from any layer:
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1qIo0xS9DJlZ5J4YpBdf_QryeYwfu_VZnJaOqhRwZyjuU8t7dQZ0sgXCJOtPs5Jcrf1o-FJYclJx-qrTUfSb7djOgas24nu-nMDz0jzVgEYaLQnkRjCm2-KKx1xgDZ6yFvCL7cNdF3oU/s1600/spaghetti.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1qIo0xS9DJlZ5J4YpBdf_QryeYwfu_VZnJaOqhRwZyjuU8t7dQZ0sgXCJOtPs5Jcrf1o-FJYclJx-qrTUfSb7djOgas24nu-nMDz0jzVgEYaLQnkRjCm2-KKx1xgDZ6yFvCL7cNdF3oU/s400/spaghetti.png" width="400" height="341" data-original-width="640" data-original-height="545" /></a>
</div>
<p>
To address this problem I think it would be useful to have a way of documenting and enforcing which objects can invoke a method on another object. In Java this can be achieved with a couple of annotations and some aspect oriented programming. Below is an annotation named <code>CallableFrom</code> which can be used to annotate methods on a class indicating what classes and interface implementations the method can be called from.
</p>
<p>CallableFrom.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.adrianwalker.callablefrom.test.TestCaller;
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface CallableFrom {
CallableFromClass[] value() default {
@CallableFromClass(TestCaller.class)
};
}
</pre>
<p>
The annotation's <code>value</code> method returns an array of another annotation <code>CallableFromClass</code>:
</p>
<p>CallableFromClass.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Target(ElementType.ANNOTATION_TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface CallableFromClass {
Class value();
boolean subclasses() default true;
}
</pre>
<p>
The annotation's <code>value</code> method returns a <code>Class</code> object - the class (or interface) of an object which is allowed to call the annotated method. The annotation's <code>subclasses</code> method returns a <code>boolean</code> value which flags if subclasses (or interface implementations) are allowed to call the annotated method.
</p>
<p>
At this point the annotations do nothing, we need a way of enforcing the behaviour specified by the annotations. This can be achieved using an <a href="https://www.eclipse.org/aspectj/">AspectJ</a> aspect class:
</p>
<p>CallableFromAspect.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
@Aspect
public final class CallableFromAspect {
@Before("@annotation(callableFrom) && call(* *.*(..))")
public void before(final JoinPoint joinPoint, final CallableFrom callableFrom) throws CallableFromError {
Class callingClass = joinPoint.getThis().getClass();
boolean isCallable = isCallable(callableFrom, callingClass);
if (!isCallable) {
Class targetClass = joinPoint.getTarget().getClass();
throw new CallableFromError(targetClass, callingClass);
}
}
private boolean isCallable(final CallableFrom callableFrom, final Class callingClass) {
boolean callable = false;
CallableFromClass[] callableFromClasses = callableFrom.value();
for (CallableFromClass callableFromClass : callableFromClasses) {
Class clazz = callableFromClass.value();
boolean subclasses = callableFromClass.subclasses();
callable = (subclasses && clazz.isAssignableFrom(callingClass))
|| (!subclasses && clazz.equals(callingClass));
if (callable) {
break;
}
}
return callable;
}
}
</pre>
<p>
The aspect intercepts any calls to methods annotated with <code>@CallableFrom</code>, gets the calling object's class and compares it to the class objects specified by the <code>@CallableFromClass</code>'s class values. If <code>subclasses</code> is <code>true</code> (the default), the calling class can be a subclass (or implementation) of the class object specified by <code>@CallableFromClass</code>. If <code>subclasses</code> is <code>false</code> the calling class must be equal to the class object specified by <code>@CallableFromClass</code>.
</p>
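<p>
The difference between the two checks can be demonstrated without AspectJ. This is a standalone sketch of the same comparison logic, using throwaway class names:
</p>

```java
public class AssignableDemo {

  static class Base {
  }

  static class Sub extends Base {
  }

  // Mirrors the comparison made in CallableFromAspect.isCallable
  static boolean callable(final Class allowed, final Class caller, final boolean subclasses) {
    return (subclasses && allowed.isAssignableFrom(caller))
        || (!subclasses && allowed.equals(caller));
  }

  public static void main(final String[] args) {
    System.out.println(callable(Base.class, Sub.class, true));   // true - subclasses allowed
    System.out.println(callable(Base.class, Sub.class, false));  // false - exact class required
    System.out.println(callable(Base.class, Base.class, false)); // true - exact match
  }
}
```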
<p>
If the above conditions are not met for any of the <code>@CallableFromClass</code> annotations, the method is not callable from the calling class and a <code>CallableFromError</code> is thrown. <code>CallableFromError</code> extends <code>Error</code> rather than <code>Exception</code>, as application code is never expected to catch it.
</p>
<p>CallableFromError.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
public final class CallableFromError extends Error {
private static final String EXCEPTION_MESSAGE = "%s is not callable from %s";
public CallableFromError(final Class targetClass, final Class callingClass) {
super(String.format(EXCEPTION_MESSAGE,
targetClass.getCanonicalName(),
callingClass.getCanonicalName()));
}
}
</pre>
<p>
For example, if you have a class named <code>Callable</code> and you only want to be able to call it from another class named <code>CallableCaller</code>, no subclasses:
</p>
<p>Callable.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
public final class Callable {
@CallableFrom({
@CallableFromClass(value=CallableCaller.class, subclasses=false)
})
public void doStuff() {
System.out.println("Callable doing stuff");
}
}
</pre>
<p>
Another example: some business logic encapsulated in an object which should only be called by a service object and test classes:
</p>
<p>UpperCaseBusinessObject.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom.example.application;
import org.adrianwalker.callablefrom.CallableFrom;
import org.adrianwalker.callablefrom.CallableFromClass;
import org.adrianwalker.callablefrom.test.TestCaller;
public final class UpperCaseBusinessObject implements ApplicationLayer {
@CallableFrom({
@CallableFromClass(value = MessageService.class, subclasses = false),
@CallableFromClass(value = TestCaller.class, subclasses = true)
})
public String uppercaseMessage(final String message) {
if (null == message) {
return null;
}
return message.toUpperCase();
}
}
</pre>
<h4>Testing</h4>
<p>
To make classes callable from JUnit tests, the unit test class should implement the <code>TestCaller</code> interface. This interface is the default value for the <code>CallableFrom</code> annotation:
</p>
<p>CallableFromTest.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
import org.adrianwalker.callablefrom.test.TestCaller;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import org.junit.Test;
public final class CallableFromTest implements TestCaller {
@Test
public void testCallableFromTestCaller() {
CallableCaller cc = new CallableCaller(new Callable());
cc.doStuff();
}
@Test
public void testCallableFromError() {
ErrorCaller er = new ErrorCaller(new CallableCaller(new Callable()));
try {
er.doStuff();
fail("Expected CallableFromError to be thrown");
} catch (final CallableFromError cfe) {
String expectedMessage
= "org.adrianwalker.callablefrom.Callable "
+ "is not callable from "
+ "org.adrianwalker.callablefrom.ErrorCaller";
String actualMessage = cfe.getMessage();
assertEquals(expectedMessage, actualMessage);
}
}
@Test
public void testNotCallableFromSubclass() {
CallableCallerSubclass ccs = new CallableCallerSubclass(new Callable());
try {
ccs.doStuff();
fail("Expected CallableFromError to be thrown");
} catch (final CallableFromError cfe) {
String expectedMessage
= "org.adrianwalker.callablefrom.Callable "
+ "is not callable from "
+ "org.adrianwalker.callablefrom.CallableCallerSubclass";
String actualMessage = cfe.getMessage();
assertEquals(expectedMessage, actualMessage);
}
}
}
</pre>
<p>
Where <code>CallableCaller</code> can be called from implementations of <code>TestCaller</code>:
</p>
<p>CallableCaller.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom;
import org.adrianwalker.callablefrom.test.TestCaller;
public class CallableCaller {
private final Callable callable;
public CallableCaller(final Callable callable) {
this.callable = callable;
}
@CallableFrom({
@CallableFromClass(value=ErrorCaller.class, subclasses = false),
@CallableFromClass(value=TestCaller.class, subclasses = true)
})
public void doStuff() {
System.out.println("CallableCaller doing stuff");
callable.doStuff(); // callable from here
}
}
</pre>
<h4>Usage</h4>
<p>
Using the <code>callable-from</code> library in a project requires the aspect to be weaved into your code at build time. Using <a href="https://maven.apache.org/">Apache Maven</a>, this means using the <a href="https://www.mojohaus.org/aspectj-maven-plugin/">AspectJ plugin</a> and specifying <code>callable-from</code> as a weave dependency:
</p>
<p>pom.xml</p>
<pre class="brush:xml">
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.11</version>
<configuration>
<complianceLevel>1.8</complianceLevel>
<weaveDependencies>
<weaveDependency>
<groupId>org.adrianwalker.callablefrom</groupId>
<artifactId>callable-from</artifactId>
</weaveDependency>
</weaveDependencies>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>test-compile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</pre>
<h4>Overhead</h4>
<p>
Checking every annotated method introduces significant overhead. I've benchmarked the same code compiled and run with and without the aspect weaved at compile time:
</p>
<p>BenchmarkTest.java</p>
<pre class="brush:java">
package org.adrianwalker.callablefrom.example;
import java.util.Random;
import org.adrianwalker.callablefrom.CallableFrom;
import org.adrianwalker.callablefrom.CallableFromClass;
import org.junit.Test;
public final class BenchmarkTest {
private static class CallableFromRandomNumberGenerator {
private static final Random RANDOM = new Random(System.currentTimeMillis());
@CallableFrom({
@CallableFromClass(value = BenchmarkTest.class, subclasses = false)
})
public int nextInt() {
return RANDOM.nextInt();
}
}
@Test
public void testBenchmarkCallableFrom() {
long elapsed = generateRandomNumbers(1_000_000_000);
System.out.printf("%s milliseconds\n", elapsed);
}
private long generateRandomNumbers(final int n) {
CallableFromRandomNumberGenerator cfrng = new CallableFromRandomNumberGenerator();
long start = System.currentTimeMillis();
for (long i = 0; i < n; i++) {
cfrng.nextInt();
}
long end = System.currentTimeMillis();
return end - start;
}
}
</pre>
<p>Without aspect weaving:</p>
<pre>
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.adrianwalker.callablefrom.example.BenchmarkTest
13075 milliseconds
</pre>
<p>With aspect weaving:</p>
<pre>
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.adrianwalker.callablefrom.example.BenchmarkTest
81951 milliseconds
</pre>
<p>
13075 milliseconds vs 81951 milliseconds means the above code took 6.3 times longer to execute with <code>@CallableFrom</code> checking enabled. For this reason, if execution speed is important to you, I'd recommend only weaving the aspect for a test build profile and using another build profile, without the AspectJ plugin, for building your release artifacts (see the <code>callable-from-usage</code> project <code>pom.xml</code> for an example).
</p>
<h4>Conclusions</h4>
<p>
So is this the worst idea ever in the history of programming? Speed issues aside, it probably is because:
<ol>
<li>I've never seen a language that offers this sort of method call enforcement as standard.</li>
<li>An object in layer n, called by an object in layer n+1, should ideally contain no knowledge of the layer above it. The code could be changed to compare canonical class name strings rather than the class objects themselves, so imports for calling classes are not needed in the callable class - but this creates a maintenance problem: refactoring tools won't automatically change the fully qualified class names in the string values, and the compiler can't tell you if a class name does not exist.</li>
</ol>
</p>
<p>
That said, I still think something like this could help stop the proliferation of spaghetti code.
</p>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/callable-from">callable-from</a></li>
</ul>
</p>
<p>
The annotations and aspect code are provided in the <code>callable-from</code> project, with an example usage project similar to the diagram at the start of this post provided in the <code>callable-from-usage</code> project.
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-31961972999229944612018-04-22T01:05:00.000+01:002018-04-27T22:42:13.910+01:00Dynamically Typed Stacks Make Me Nervous<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyPcqnysIWBgtsrdKCbgsYkrnFR48UuuFjCEX_n7mh9u__2qSrenseVS0cZAV6fzSmUkNbOf1iCv5KoYQVQoet3lrYTwisULV2kJbxaqldMdTxPcgS4m67yY5uJi1_9IN9uI5ZBShk3uY/s1600/im-one-of-the-few-people-youll-meet-whos-written-more-books-than-theyve-read.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyPcqnysIWBgtsrdKCbgsYkrnFR48UuuFjCEX_n7mh9u__2qSrenseVS0cZAV6fzSmUkNbOf1iCv5KoYQVQoet3lrYTwisULV2kJbxaqldMdTxPcgS4m67yY5uJi1_9IN9uI5ZBShk3uY/s400/im-one-of-the-few-people-youll-meet-whos-written-more-books-than-theyve-read.jpg" width="400" height="200" data-original-width="640" data-original-height="320" /></a></div>
<p>
Ten years ago <a href="https://twitter.com/theodoredziuba">Ted Dziuba</a> wrote <a href="http://widgetsandshit.com/teddziuba/2008/12/python-makes-me-nervous.html">Python Makes Me Nervous</a>, I agree with everything he wrote back then - I suppose I'm what <a href="https://twitter.com/steve_yegge?lang=en">Steve Yegge</a> would call a <a href="https://plus.google.com/110981030061712822816/posts/KaSKeg4vQtz">Software Conservative</a>. Ten years on, the static vs dynamic language debate is no closer to being over and now what makes *me* really nervous is entire dynamically typed system stacks.
</p>
<p>
To be more accurate, what I mean by dynamically typed stacks is: systems built with dynamically typed languages and composed of schema-less services, end-to-end. Let me explain ...
</p>
<p>
When I was a young programmer, if you wanted to create a web service you used <a href="https://en.wikipedia.org/wiki/XML-RPC">XML-RPC</a> or <a href="https://en.wikipedia.org/wiki/SOAP">SOAP</a>. I liked SOAP (yeah, I said it!), with a well defined <a href="https://en.wikipedia.org/wiki/Web_Services_Description_Language">WSDL</a> and some <a href="https://en.wikipedia.org/wiki/XML_Schema_(W3C)">XSD</a> you knew exactly what your client/server was going to send/receive. You generated client code and server side stub classes with <a href="http://axis.apache.org/axis2/java/core/">Apache Axis</a> and you got serialisation, de-serialisation, parsing, validation and error handling all for free.
</p>
<p>
Now everyone uses <a href="https://en.wikipedia.org/wiki/Representational_state_transfer">REST</a> and <a href="https://en.wikipedia.org/wiki/JSON">JSON</a>. Instead of well defined <a href="https://en.wikipedia.org/wiki/XML">XML</a> services, RESTful web services have to try and shoehorn requests into an HTTP GET/POST/PUT/DELETE method along with some path parameters and/or query parameters and/or request/response headers. Serialisation and validation for RESTful web services are often made an implementation concern of the application, with custom serialisation/de-serialisation handlers and bespoke validation code.
</p>
<p>
I like <a href="https://en.wikipedia.org/wiki/Relational_database">Relational databases</a> (You heard me!). With a well defined <a href="https://en.wikipedia.org/wiki/Database_schema"> schema</a> you know exactly what data you're going to store and retrieve.
Database constraints enforce data correctness and referential integrity and it all gets managed for free in one place.
</p>
<p>
Now we have schema-less <a href="https://en.wikipedia.org/wiki/NoSQL">NoSQL databases</a>. These types of data stores are supposedly popular because of their horizontal scalability and fault tolerance across network partitions, but in reality, they are popular because they can be used as a data dumping ground with no need for data modelling, schema design, normalisation/de-normalisation, transaction handling, index design, query plan analysis or learning a query language. Data consistency, typing, referential integrity, transactions etc. are all concerns pushed onto the application to implement.
</p>
<p>
Over the last ten years, knowing fuck all about the data your system operates on until run-time has become trendy.
</p>
<p>
Enough ranting. Let's look at some code; here's a (contrived) example. Let's say we have an existing Java code base, with a <code>PersonController</code> class for persisting a person's contact details, for use in a contacts list application or something. How do you use this API? Well, the class's method signatures and a good IDE tell you everything you need to know with a minimum of key strokes:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNWan_JrayyYwhgNiXPPKzYdpH5yqHwAceTJv5EirM1-bK9TkU7aB-n5Exu2zg3h8X0KyMgrG73O4mnp6oc5DTykoa76plAVavy6hZK_4GkUeckuCfvaBinFPFZBo43Dk73Q2zDGTCpVU/s1600/java-example.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNWan_JrayyYwhgNiXPPKzYdpH5yqHwAceTJv5EirM1-bK9TkU7aB-n5Exu2zg3h8X0KyMgrG73O4mnp6oc5DTykoa76plAVavy6hZK_4GkUeckuCfvaBinFPFZBo43Dk73Q2zDGTCpVU/s400/java-example.png" width="400" height="344" data-original-width="572" data-original-height="492" /></a></div>
<p>
I know I need to pass a <code>Person</code> object to the <code>save</code> method. My IDE will tell me what properties I can set on the <code>Person</code> object. The method throws a checked exception if anything goes wrong, or returns a <code>UUID</code> if the entity is persisted correctly. Awesome, I've got everything I need to use this API in my application, I don't need to care about the implementation details.
</p>
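<p>
For readers without the screenshot, the API described above can be sketched like this - the property names and implementation are hypothetical; only <code>PersonController</code>, <code>Person</code>, <code>save</code> and the <code>UUID</code> return type come from the post:
</p>

```java
import java.util.UUID;

public class PersonApiSketch {

  public static class Person {
    private String firstName;
    private String lastName;

    public void setFirstName(final String firstName) {
      this.firstName = firstName;
    }

    public void setLastName(final String lastName) {
      this.lastName = lastName;
    }
  }

  // Checked exception - the compiler forces callers to handle the failure case
  public static class ControllerSaveException extends Exception {
    public ControllerSaveException(final String message) {
      super(message);
    }
  }

  public static class PersonController {
    public UUID save(final Person person) throws ControllerSaveException {
      if (null == person) {
        throw new ControllerSaveException("person required");
      }
      return UUID.randomUUID(); // stand-in for the real persistence call
    }
  }

  public static void main(final String[] args) throws ControllerSaveException {
    Person person = new Person();
    person.setFirstName("Alice");
    person.setLastName("Smith");
    System.out.println(new PersonController().save(person));
  }
}
```

<p>
Everything the compiler and the IDE need - argument type, return type, failure mode - is right there in the <code>save</code> signature.
</p>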
<p>
Now let's do the same thing with Python:
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpwweR4o6oC9GPJ02KP2U918fq6tf3-EL5mpSFWWBal4ii0oA2R4XCeU3cXdKcoHvIT1-_T8aBZo1zpsxDkxlZvQfba0l0bvH830DWnn1XH06G4RHh9E_ha8WrXUfD4qnsdCGOMn4hxss/s1600/python-example.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpwweR4o6oC9GPJ02KP2U918fq6tf3-EL5mpSFWWBal4ii0oA2R4XCeU3cXdKcoHvIT1-_T8aBZo1zpsxDkxlZvQfba0l0bvH830DWnn1XH06G4RHh9E_ha8WrXUfD4qnsdCGOMn4hxss/s400/python-example.png" width="400" height="170" data-original-width="619" data-original-height="263" /></a></div>
<p>
The <code>save</code> method takes one argument, that's all I know. I'd better go have a look at the code...
</p>
<pre class="brush:python">
class PersonController(object):
URL = 'http://%s:%s/person'
def __init__(self, host='localhost', port=8888):
self.url = self.URL % (host, port)
def save(self, person):
data = person if isinstance(person, dict) else person.__dict__
response = requests.post(self.url, data=json.dumps(data))
if response.status_code != 201:
raise ControllerSaveException(response.status_code, response.json()['error'])
return uuid.UUID(response.json()['id'])
</pre>
<p>
... it makes a REST call. <code>person</code> can be anything that can be serialised to JSON and posted to the <code>/person</code> URL. I'd better go try and find the code for the web service...
</p>
<pre class="brush:python">
class Application(tornado.web.Application):

    def __init__(self):
        handlers = [
            (r'/person/?', Handler)
        ]
        tornado.web.Application.__init__(self, handlers)

    def listen(self, address='localhost', port=8888, **kwargs):
        super(Application, self).listen(port, address, **kwargs)
</pre>
<p>
... it's a <a href="http://www.tornadoweb.org">Tornado</a> REST web service, let's go check the handler class...
</p>
<pre class="brush:python">
class Handler(tornado.web.RequestHandler):

    def __init__(self, application, request, **kwargs):
        super(Handler, self).__init__(application, request, **kwargs)
        self.publisher = Publisher()

    def set_default_headers(self):
        self.set_header('Content-Type', 'application/json')

    def prepare(self):
        try:
            self.request.arguments.update(json.loads(self.request.body))
        except ValueError:
            self.send_error(400, message='Error parsing JSON')

    def post(self):
        response = json.loads(self.publisher.publish(self.request.body.decode('utf-8')))
        self.set_status(response['status'])
        self.write(json.dumps(response))
        self.flush()
</pre>
<p>
... this tells me nothing about what the <code>person</code> object's JSON representation should contain, and WTF is <code>Publisher</code> for? I'd better go find that code and take a look...
</p>
<pre class="brush:python">
class Publisher(object):

    def __init__(self, host='localhost', queue='person'):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response, no_ack=True, queue=self.callback_queue)
        self.response = None
        self.correlation_id = None
        self.queue = queue

    def on_response(self, channel, method, properties, body):
        if self.correlation_id == properties.correlation_id:
            self.response = body

    def publish(self, data):
        self.correlation_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key=self.queue,
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.correlation_id,
                                   ),
                                   body=data)
        while self.response is None:
            self.connection.process_data_events()
        return self.response
</pre>
<p>
... FFS, it publishes the JSON to a <a href="https://www.rabbitmq.com/">RabbitMQ</a> message queue. I'd better go find the code for the possible consumers ...
</p>
<pre class="brush:python">
class Consumer(object):

    def __init__(self, host='localhost', queue='person', bucket='person'):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=queue)
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.on_request, queue=queue)
        self.dataStore = datastore.DataStore(bucket)

    def on_request(self, channel, method, properties, body):
        request = json.loads(body)
        errors = self.validate(request)
        if errors:
            response = {
                'status': 400,
                'error': ', '.join(errors)
            }
        else:
            response = self.save(request)
        self.channel.basic_publish(exchange='',
                                   routing_key=properties.reply_to,
                                   properties=pika.BasicProperties(
                                       correlation_id=properties.correlation_id),
                                   body=json.dumps(response))
        self.channel.basic_ack(delivery_tag=method.delivery_tag)

    def consume(self):
        self.channel.start_consuming()

    def validate(self, request):
        errors = []
        if 'first_name' not in request or not request['first_name']:
            errors.append('Invalid or missing first name')
        if 'last_name' not in request or not request['last_name']:
            errors.append('Invalid or missing last name')
        return errors

    def save(self, request):
        id = str(uuid.uuid4())
        try:
            self.dataStore.save(id, request)
            response = {
                'id': id,
                'status': 201,
            }
        except Exception as e:
            response = {
                'status': 500,
                'error': str(e)
            }
        return response
</pre>
<p>
... some bespoke validation code tells me I have to have <code>first_name</code> and <code>last_name</code> keys in my JSON object. Then the object gets saved to the <code>person</code> bucket in a <a href="http://basho.com/products/riak-kv/">Riak</a> database. But what else should be in my object? Let's <code>curl</code> an existing record and have a look...
</p>
<pre>
$ curl http://127.0.0.1:10018/riak/person/b8aa0197-89db-4550-9fba-2c0d4b132b67
{"first_name": "Adrian", "last_name": "Walker"}
</pre>
<p>
... and I'm no closer to knowing exactly what should or shouldn't be in a <code>person</code> object.
</p>
<p>
What a waste of time.
</p>
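For what it's worth, the only place in that whole chain where the client shapes the payload is the one-line coercion in <code>PersonController.save</code>. Pulled out as a standalone sketch (the <code>Person</code> class here is my own stand-in, not something from the repo), it shows exactly how little is enforced:

```python
import json


class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name


def to_payload(person):
    # the same coercion PersonController.save performs before POSTing
    data = person if isinstance(person, dict) else person.__dict__
    return json.dumps(data, sort_keys=True)


print(to_payload(Person('Adrian', 'Walker')))
print(to_payload({'first_name': 'Adrian'}))  # nothing here stops a half-built person
```

Anything with a <code>__dict__</code>, or any dict at all, serialises happily; the missing <code>last_name</code> only surfaces as a 400 from the consumer at the far end of the queue.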
<h4>Source Code</h4>
<p>
<ul>
<li>Python code available in GitHub - <a href="https://github.com/adrianwalker/dynamic-stacks-make-me-nervous">dynamic-stacks-make-me-nervous</a></li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-28698787697282265402018-04-01T22:41:00.001+01:002018-04-01T22:48:17.302+01:00Riak - Building a Development Environment From Source<p>
Building a <a href="http://basho.com/products/riak-kv/">Riak</a> development environment, like anything involving Linux, is needlessly complicated. This method to build from source worked for me from a clean install of <a href="https://lubuntu.net/">Lubuntu</a> 17.10.1:
</p>
<p>
First, update your package index and install the dependencies and utilities you will need:
</p>
<pre class="brush:bash">
$ sudo apt-get update
$ sudo apt-get install build-essential autoconf libncurses5-dev libpam0g-dev openssl libssl-dev fop xsltproc unixodbc-dev git curl
</pre>
<p>
Next, navigate to your home directory, download <a href="https://github.com/kerl/kerl">kerl</a> and use it to build and install the <a href="https://github.com/basho/otp">Basho version of Erlang</a> (WHY?!?!). These steps took a while to complete on my machine; bear with it:
</p>
<pre class="brush:bash">
$ cd ~
$ curl -O https://raw.githubusercontent.com/kerl/kerl/master/kerl
$ chmod a+x kerl
$ ./kerl build git git://github.com/basho/otp.git OTP_R16B02_basho10 R16B02-basho10
$ ./kerl install R16B02-basho10 ~/erlang/R16B02-basho10
$ . ~/erlang/R16B02-basho10/activate
</pre>
<p>
With Erlang installed, clone the <a href="https://github.com/basho/riak">Riak source repository from GitHub</a> and build:
</p>
<pre class="brush:bash">
$ git clone https://github.com/basho/riak.git
$ cd riak
$ make rel
</pre>
<p>
Finally, create 8 separate copies of Riak to use in a cluster:
</p>
<pre class="brush:bash">
$ make devrel
</pre>
<p>
Start 3 (or more) Riak instances:
</p>
<pre class="brush:bash">
$ dev/dev1/bin/riak start
$ dev/dev2/bin/riak start
$ dev/dev3/bin/riak start
</pre>
<p>
Then join instances 2 and 3 with instance 1 to form a cluster:
</p>
<pre class="brush:bash">
$ dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
$ dev/dev3/bin/riak-admin cluster join dev1@127.0.0.1
</pre>
<p>
Check and commit the cluster plan:
</p>
<pre class="brush:bash">
$ dev/dev3/bin/riak-admin cluster plan
$ dev/dev3/bin/riak-admin cluster commit
</pre>
<p>
Monitor the cluster status until all pending changes are complete:
</p>
<pre class="brush:bash">
$ dev/dev3/bin/riak-admin cluster status
---- Cluster Status ----
Ring ready: false

+--------------------+------+-------+-----+-------+
|        node        |status| avail |ring |pending|
+--------------------+------+-------+-----+-------+
| (C) dev1@127.0.0.1 |valid |  up   |100.0| 34.4  |
|   dev2@127.0.0.1   |valid |  up   |  0.0| 32.8  |
|   dev3@127.0.0.1   |valid |  up   |  0.0| 32.8  |
+--------------------+------+-------+-----+-------+

$ dev/dev3/bin/riak-admin cluster status
---- Cluster Status ----
Ring ready: true

+--------------------+------+-------+-----+-------+
|        node        |status| avail |ring |pending|
+--------------------+------+-------+-----+-------+
| (C) dev1@127.0.0.1 |valid |  up   | 34.4|  --   |
|   dev2@127.0.0.1   |valid |  up   | 32.8|  --   |
|   dev3@127.0.0.1   |valid |  up   | 32.8|  --   |
+--------------------+------+-------+-----+-------+
</pre>
<p>Check the cluster member status:</p>
<pre class="brush:bash">
$ dev/dev3/bin/riak-admin member-status
================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid      34.4%      --      'dev1@127.0.0.1'
valid      32.8%      --      'dev2@127.0.0.1'
valid      32.8%      --      'dev3@127.0.0.1'
-------------------------------------------------------------------------------
Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
</pre>
<p>
Congratulations, you have a development Riak cluster. Test the cluster by writing some data to a node:
</p>
<pre class="brush:bash">
$ curl -XPUT http://127.0.0.1:10018/riak/test/helloworld -H "Content-type: application/json" --data-binary "Hello World!"
</pre>
<p>
Use a browser to read the data from each node:<br/>
<a href="http://127.0.0.1:10018/riak/test/helloworld">http://127.0.0.1:10018/riak/test/helloworld</a><br/>
<a href="http://127.0.0.1:10028/riak/test/helloworld">http://127.0.0.1:10028/riak/test/helloworld</a><br/>
<a href="http://127.0.0.1:10038/riak/test/helloworld">http://127.0.0.1:10038/riak/test/helloworld</a><br/>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-83982726583675900492018-02-08T21:29:00.000+00:002019-03-30T12:48:58.713+00:00Tell 'em Steve-Dave!<p>
<a href="http://www.smodcast.com/channel/tesdpodcast">SoundCloud's web interface</a> is rubbish for downloading podcasts, but their API is pretty good, <a href="https://github.com/adrianwalker/tellemstevedave">so here's a handy Python script</a> for downloading all of your favourite <a href="https://www.tellemstevedave.com/">Tell 'em Steve-Dave!</a> episodes:
</p>
<pre class="brush:python">
import os.path
import re

import requests

API_URL = "http://api.soundcloud.com"
TRACKS_URL = API_URL + "/users/%(USER_ID)s/tracks" \
                       "?client_id=%(CLIENT_ID)s" \
                       "&offset=%(OFFSET)s" \
                       "&limit=%(LIMIT)s" \
                       "&format=json"
DOWNLOAD_URL = "https://api.soundcloud.com/tracks/%(TRACK_ID)s/download?client_id=%(CLIENT_ID)s"
CHUNK_SIZE = 16 * 1024
TESD_USER_ID = "79299245"
CLIENT_ID = "3b6b877942303cb49ff687b6facb0270"
LIMIT = 10

offset = 0

while True:

    url = TRACKS_URL % {
        "USER_ID": TESD_USER_ID,
        "CLIENT_ID": CLIENT_ID,
        "LIMIT": LIMIT,
        "OFFSET": offset
    }

    tracks = requests.get(url).json()
    if not tracks:
        break

    tracks = [(track["id"], track["title"]) for track in tracks]

    for (id, title) in tracks:

        title = str(re.sub('[^A-Za-z0-9]+', '_', title)).strip('_')
        url = DOWNLOAD_URL % {"TRACK_ID": id, "CLIENT_ID": CLIENT_ID}
        filename = "%s.mp3" % title

        print "downloading: %s from %s" % (filename, url)

        if os.path.exists(filename):
            continue

        request = requests.get(url, stream=True)

        with open(filename + ".tmp", 'wb') as fd:
            chunks = request.iter_content(chunk_size=CHUNK_SIZE)
            for chunk in chunks:
                fd.write(chunk)

        os.rename(filename + ".tmp", filename)

    offset = offset + LIMIT
</pre>
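The only fiddly bit is turning track titles into safe filenames. Pulled out on its own (the example title below is made up), the sanitisation in the download loop behaves like this:

```python
import re


def safe_filename(title):
    # collapse each run of non-alphanumeric characters to a single '_',
    # then trim leading/trailing underscores - as the download loop does
    return re.sub('[^A-Za-z0-9]+', '_', title).strip('_')


print(safe_filename("Tell 'em Steve-Dave! Episode #347: Pattys Pub"))
# Tell_em_Steve_Dave_Episode_347_Pattys_Pub
```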
<p>
4 colors 4 life
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-9920217586195543292018-01-27T17:37:00.003+00:002018-02-03T15:25:24.958+00:00Overengineering Shit<p>
I’ve had enough of Flickr, for all <a href="https://www.google.co.uk/search?q=flickr+sucks">the standard reasons</a>.
</p>
<p>
So I set out to build a scalable, secure, distributed, image sharing platform of my own, using open source components, tried and tested tech, with no bullshit.
</p>
<p>
It would be great, I thought, I could start small, just hosting my own photos; then I could open it up to friends and family, working out the bugs as I go, seamlessly scaling up the hardware as required. Then, who knows, I could open it up to the internet! It was going to be awesome!
</p>
<p>
I wanted nothing fancy for the implementation - only mature tech, battle tested stuff, which wasn’t going to become unsupported any time soon. And only the right tools for the job:
<ol>
<li>
Bulk uploading files over HTTP is bollocks; transferring files is a solved problem, so the system should use FTP.
</li>
<li>
No custom user database, no hand-rolled permissions pseudo-framework, don’t re-invent the wheel: authentication and authorisation should be handled by LDAP.
</li>
<li>
Image storage should be implemented using a distributed, scalable filesystem; I want to just add more nodes when disk space starts running low.
</li>
<li>
Image processing, such as thumbnail generation, should be asynchronous, with jobs taken from a scalable message queue; that way I can add more message processors when I need to.
</li>
<li>
Simple REST webservices should be used by a client to fetch images and image metadata from the server.
</li>
<li>
And finally the web UI should be simple, responsive and avoid JavaScript framework bloat.
</li>
</ol>
</p>
<p>
My chosen implementations to satisfy the above included:
<ul>
<li>
Java 8 with <a href="https://github.com/jax-rs">JAX-RS</a> and <a href="https://jersey.github.io/">Jersey</a>
</li>
<li>
<a href="https://mina.apache.org/ftpserver-project/">Apache FTP Server</a>
</li>
<li>
<a href="http://directory.apache.org/">Apache Directory Server</a>
</li>
<li>
<a href="http://cassandra.apache.org/">Apache Cassandra</a>
</li>
<li>
<a href="https://kafka.apache.org/">Apache Kafka</a> + <a href="https://zookeeper.apache.org">Zookeeper</a>
</li>
<li>
<a href="http://tomcat.apache.org/">Apache Tomcat</a>
</li>
<li>
<a href="https://jquery.com/">jQuery</a>
</li>
</ul>
</p>
<p>
Also using <a href="https://github.com/rkalla/imgscalr">imgscalr-lib</a> for preview generation, <a href="https://avro.apache.org/">Apache Avro</a> and <a href="https://commons.apache.org/proper/commons-lang/">Apache commons-lang</a> for serialization, <a href="https://tika.apache.org/">Apache Tika</a> for image format detection, <a href="https://www.slf4j.org/">SLF4J</a> and <a href="https://logback.qos.ch/">Logback</a> for logging and finally <a href="https://www.ansible.com/">Ansible</a> for deployment.
</p>
<p>
The components logically hang together something like this:
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQiECmqBiWkpCvWsfK0zaZP9xdLvo9D8DWXf6erP7TWv3OvBVHpLHITZKJjp2a_7v951EUlzYiq4fta7PCkRCD6R8A_Zf-3Np0XI00JLAmfWC2aeWIrtNGFn9P4-GWhuNYtlILOr3iTYs/s1600/uploadserver.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQiECmqBiWkpCvWsfK0zaZP9xdLvo9D8DWXf6erP7TWv3OvBVHpLHITZKJjp2a_7v951EUlzYiq4fta7PCkRCD6R8A_Zf-3Np0XI00JLAmfWC2aeWIrtNGFn9P4-GWhuNYtlILOr3iTYs/s640/uploadserver.png" width="640" height="409" data-original-width="1294" data-original-height="827" /></a></div>
<p>
<a href="https://github.com/adrianwalker/uploadserver">The code</a> was looking good enough to get something up and running - it was time to investigate some hosting costs. I figured I would need a big-ish box to put Apache Cassandra and Apache Directory Server on, a small-ish box for the Apache Tomcat and Apache Webserver, another small-ish box for Apache FTP Server, and a medium sized box for Apache Kafka and the consumer processes.
</p>
<p>
Time to check out some recommended production hardware requirements for <a href="http://cassandra.apache.org/doc/latest/operating/hardware.html">Cassandra</a>:
</p>
<p>
<q>
<b>a minimal production server</b> requires at least 2 cores, and at least <b>8GB of RAM</b>. <b>Typical production servers</b> have 8 or more cores and at least <b>32GB of RAM</b>
</q>
</p>
<p>
And for <a href="https://docs.confluent.io/current/kafka/deployment.html">Kafka</a>:
</p>
<p>
<q>
A machine with <b>64 GB of RAM is a decent choice</b>, but 32 GB machines are not uncommon. Less than 32 GB tends to be counterproductive (you end up needing many, many small machines).
</q>
</p>
<p>
<b>Are you fucking kidding me?</b> When did minimum requirements for a database and a queue become 32GB of fucking RAM each?!
</p>
<p>
At <a href="https://www.digitalocean.com/pricing/">DigitalOcean's current prices</a>, an 8GB droplet is $40 a month and a 32GB droplet is $160 a month, and the smaller droplets anything between $5 and $20 a month.
</p>
<table class="bui-Table PricingTable">
<thead>
<tr>
<th>
Memory
</th>
<th>
vCPUs
</th>
<th>
SSD Disk
</th>
<th>
Transfer
</th>
<th>
Price
</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<strong>1 GB</strong>
</td>
<td>
1 vCPU
</td>
<td>
25 GB
</td>
<td>
1 TB
</td>
<td>
<strong>$5/mo</strong>
<br>
$0.007/hr
</td>
</tr>
<tr>
<td>
<strong>2 GB</strong>
</td>
<td>
1 vCPU
</td>
<td>
50 GB
</td>
<td>
2 TB
</td>
<td>
<strong>$10/mo</strong>
<br>
$0.015/hr
</td>
</tr>
<tr>
<td>
<strong>4 GB</strong>
</td>
<td>
2 vCPUs
</td>
<td>
80 GB
</td>
<td>
4 TB
</td>
<td>
<strong>$20/mo</strong>
<br>
$0.030/hr
</td>
</tr>
<tr>
<td>
<strong>8 GB</strong>
</td>
<td>
4 vCPUs
</td>
<td>
160 GB
</td>
<td>
5 TB
</td>
<td>
<strong>$40/mo</strong>
<br>
$0.060/hr
</td>
</tr>
<tr>
<td>
<strong>16 GB</strong>
</td>
<td>
6 vCPUs
</td>
<td>
320 GB
</td>
<td>
6 TB
</td>
<td>
<strong>$80/mo</strong>
<br>
$0.119/hr
</td>
</tr>
<tr>
<td>
<strong>32 GB</strong>
</td>
<td>
8 vCPUs
</td>
<td>
640 GB
</td>
<td>
7 TB
</td>
<td>
<strong>$160/mo</strong>
<br>
$0.238/hr
</td>
</tr>
<tr>
<td>
...
</td>
<td>
...
</td>
<td>
...
</td>
<td>
...
</td>
<td>
...
</td>
</tr>
</tbody>
</table>
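Putting rough numbers on it, using the prices above and my own guess at a minimal deployment (one 32 GB box for Cassandra and Directory Server, a 16 GB box for Kafka and the consumers, and a couple of 2 GB boxes for Tomcat and the FTP server):

```python
# DigitalOcean monthly price by droplet RAM (GB), from the table above
PRICES = {1: 5, 2: 10, 4: 20, 8: 40, 16: 80, 32: 160}

monthly_cost = (
    PRICES[32]       # Cassandra + Apache Directory Server
    + PRICES[16]     # Kafka + Zookeeper + consumers
    + 2 * PRICES[2]  # Tomcat/web server, FTP server
)
print(monthly_cost)  # 260
```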
<p>
I guess writing and running scalable systems requires a scalable bank balance - mine does not scale to over $200 a month just to upload some photos. Time to scrap this idea.
</p>
<p>
What I’ve ended up with is a $10 a month <a href="https://www.digitalocean.com/">DigitalOcean</a> droplet running <a href="https://www.nginx.com/">nginx</a> and SFTP and a <a href="https://github.com/adrianwalker/taffnaidphotos">half arsed Python script</a> which uses <a href="https://www.imagemagick.org">ImageMagick</a> to generate thumbnails and some static HTML - and you know what? It’s absolutely perfect for me.
</p>
<p>
Time to reflect on some almost-ten-year-old, but still relevant, wisdom from Ted Dziuba: <a href="http://widgetsandshit.com/teddziuba/2008/04/im-going-to-scale-my-foot-up-y.html">I'm Going To Scale My Foot Up Your Ass</a>
</p>
<p>
Python script to generate HTML and thumbnails:
</p>
<pre class="brush:python">
import os
from subprocess import call
CONVERT_CMD = "convert"
PREVIEW_SIZE = "150x150"
PREVIEW_PREFIX = "preview_"
HTML_EXTENSION = ".html"
IMG_EXTENSION = ".jpg"
INDEX = "index.html"
ALBUM_IMG = "/album.png"
LIST_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
<title>taffnaid.photos</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" type="text/css" href="/taffnaidphotos.css">
</head>
<body>
<div id="list" class="list">
<div id="list-nav" class="nav">
{0}
</div>
<div id="list-previews" class="previews">
{1}
</div>
</div>
</body>
</html>
"""
LIST_NAV_TEMPLATE = """
<a href="{0}" class="parent">☷</a>
"""
PREVIEW_TEMPLATE = """
<div class="preview">
<a href="{0}">
<img src="{1}" alt=":-("/>
<div class="name">{2}</div>
</a>
</div>
"""
VIEW_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
<title>taffnaid.photos</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" type="text/css" href="/taffnaidphotos.css">
</head>
<body>
<div id="view" class="view">
<div id="view-nav" class="nav">
{0}
</div>
<div id="view-image" class="image">
<img src="{1}" alt=":-("/>
<link rel="prefetch" href="{2}">
<link rel="prefetch" href="{3}">
</div>
</div>
</body>
</html>
"""
VIEW_NAV_TEMPLATE = """
<a href="{0}" class="previous">⟨</a>
<a href="{1}" class="parent">☷</a>
<a href="{2}" class="next">⟩</a>
"""
cwd = os.getcwd()

for root, dirs, files in os.walk(cwd):

    dirs = sorted(dirs, reverse=True)

    files = filter(lambda file: file.lower().endswith(IMG_EXTENSION), files)
    files = filter(lambda file: not file.startswith(PREVIEW_PREFIX), files)
    files = sorted(files)

    preview_html = ""

    for dir in dirs:
        preview_html = PREVIEW_TEMPLATE.format(
            os.path.join(dir, INDEX),
            ALBUM_IMG,
            dir) + preview_html

    for i, file in enumerate(files):

        previous = files[i - 1]
        parent = os.path.join(root.replace(cwd, ""), INDEX)
        next = files[(i + 1) % len(files)]

        nav_html = VIEW_NAV_TEMPLATE.format(
            previous + HTML_EXTENSION,
            parent,
            next + HTML_EXTENSION
        )

        preview_html = preview_html + PREVIEW_TEMPLATE.format(
            file + HTML_EXTENSION,
            PREVIEW_PREFIX + file,
            file)

        view_html = VIEW_TEMPLATE.format(
            nav_html,
            file,
            previous,
            next
        )

        image = os.path.abspath(os.path.join(root, file))
        preview = os.path.abspath(os.path.join(os.path.dirname(image), PREVIEW_PREFIX + os.path.basename(image)))

        if not os.path.exists(preview):
            cmd = [
                CONVERT_CMD,
                "-define", "jpeg:size=%s" % PREVIEW_SIZE,
                image,
                "-thumbnail", "%s^" % PREVIEW_SIZE,
                "-gravity", "center",
                "-extent", PREVIEW_SIZE,
                preview]
            call(cmd)

        view_file = image + HTML_EXTENSION
        with open(view_file, 'w') as view_file:
            view_file.write(view_html)

    parent = os.path.join(os.path.dirname(root.replace(cwd, "")), INDEX)
    nav_html = LIST_NAV_TEMPLATE.format(parent)

    list_html = LIST_TEMPLATE.format(
        nav_html,
        preview_html)

    index_file = os.path.join(root, INDEX)
    with open(index_file, 'w') as index_file:
        index_file.write(list_html)
</pre>
<p>
nginx config to resize and cache images:
</p>
<pre class="brush:plain">
proxy_cache_path /var/www/html/cache levels=1:2 keys_zone=resized;

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ ^/.*(\.jpg|\.JPG)$ {
        proxy_cache resized;
        proxy_cache_valid 200 10d;
        proxy_pass http://127.0.0.1:9001;
    }
}

server {
    listen 9001;
    allow 127.0.0.1;
    deny all;

    root /var/www/html;

    location ~ ^/.*(\.jpg|\.JPG)$ {
        image_filter_buffer 10M;
        image_filter resize 1920 1080;
    }
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/uploadserver">uploadserver</a></li>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/taffnaidphotos">taffnaidphotos</a></li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-65384795271144862692017-10-18T21:41:00.001+01:002017-10-19T20:01:47.984+01:00Use JAXB to generate classes from FHIR XSD schema<p>
Running the <a href="https://www.hl7.org/fhir/">FHIR</a> <a href="https://www.hl7.org/fhir/definitions.xml.zip">XSD schemas</a> through <a href="https://docs.oracle.com/javase/tutorial/jaxb/intro/">JAXB</a> throws a bunch of exceptions, for example:
</p>
<p>
<code>
com.sun.istack.SAXParseException2; systemId: file:../xsd/fhir-xhtml.xsd; lineNumber: 283; columnNumber: 52; Property "Lang" is already defined. Use <jaxb:property> to resolve this conflict.
</code>
</p>
<p>
<code>
com.sun.istack.SAXParseException2; systemId: file:../xsd/fhir-xhtml.xsd; lineNumber: 1106; columnNumber: 58; Property "Lang" is already defined. Use <jaxb:property> to resolve this conflict.
</code>
</p>
<p>
<code>
org.xml.sax.SAXParseException; systemId: file:../xsd/fhir-single.xsd; lineNumber: 81; columnNumber: 31; A class/interface with the same name "org.adrianwalker.fhir.resources.Code" is already in use. Use a class customization to resolve this conflict.
</code>
</p>
<p>
<code>
org.xml.sax.SAXParseException; systemId: file:../xsd/fhir-single.xsd; lineNumber: 1173; columnNumber: 34; A class/interface with the same name "org.adrianwalker.fhir.resources.Address" is already in use. Use a class customization to resolve this conflict.
</code>
</p>
<p>
Without modifying the original FHIR XSD files, the JAXB conflicts can be resolved using JAXB bindings:
</p>
<p>fhir-xhtml.xjb</p>
<pre class="brush:xml">
<bindings xmlns="http://java.sun.com/xml/ns/jaxb"
          xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"
          version="2.1">

  <bindings schemaLocation="../xsd/fhir-xhtml.xsd" version="1.0">

    <!--
      Fixes:-
      com.sun.istack.SAXParseException2; systemId: file:../xsd/fhir-xhtml.xsd;
      lineNumber: 283; columnNumber: 52; Property "Lang" is already defined. Use
      <jaxb:property> to resolve this conflict.
    -->
    <bindings node="//xs:attributeGroup[@name='i18n']">
      <bindings node=".//xs:attribute[@name='lang']">
        <property name="xml:lang"/>
      </bindings>
    </bindings>

    <!--
      Fixes:-
      com.sun.istack.SAXParseException2; systemId: file:../xsd/fhir-xhtml.xsd;
      lineNumber: 1106; columnNumber: 58; Property "Lang" is already defined. Use
      <jaxb:property> to resolve this conflict.
    -->
    <bindings node="//xs:element[@name='bdo']">
      <bindings node=".//xs:attribute[@name='lang']">
        <property name="xml:lang"/>
      </bindings>
    </bindings>

  </bindings>
</bindings>
</pre>
<p>fhir-single.xjb</p>
<pre class="brush:xml">
<bindings xmlns="http://java.sun.com/xml/ns/jaxb"
          xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"
          version="2.1">

  <bindings schemaLocation="../xsd/fhir-single.xsd" version="1.0">

    <!--
      Fixes:-
      org.xml.sax.SAXParseException; systemId: file:../xsd/fhir-single.xsd;
      lineNumber: 81; columnNumber: 31; A class/interface with the same name
      "org.adrianwalker.fhir.Code" is already in use. Use a class customization to
      resolve this conflict.
    -->
    <bindings node="//xs:complexType[@name='code']">
      <class name="CodeString" />
    </bindings>

    <!--
      Fixes:-
      org.xml.sax.SAXParseException; systemId: file:../xsd/fhir-single.xsd;
      lineNumber: 1173; columnNumber: 34; A class/interface with the same name
      "org.adrianwalker.fhir.Address" is already in use. Use a class customization
      to resolve this conflict.
    -->
    <bindings node="//xs:complexType[@name='Address']">
      <class name="PostalAddress" />
    </bindings>

  </bindings>
</bindings>
</pre>
<p>
I've used the <code>org.jvnet.jaxb2.maven2 maven-jaxb2-plugin</code> Maven plugin, configured with the <code>net.java.dev.jaxb2-commons jaxb-fluent-api</code> XJC plugin, to generate the resource classes with fluent API mutators for method chaining.
</p>
<p>pom.xml</p>
<pre class="brush:xml">
...
<build>
  <plugins>
    <plugin>
      <groupId>org.jvnet.jaxb2.maven2</groupId>
      <artifactId>maven-jaxb2-plugin</artifactId>
      <version>0.13.2</version>
      <configuration>
        <extension>true</extension>
        <args>
          <arg>-Xfluent-api</arg>
        </args>
        <schemaDirectory>src/main/xsd</schemaDirectory>
        <bindingDirectory>src/main/xjb</bindingDirectory>
        <generatePackage>org.adrianwalker.fhir.resources</generatePackage>
        <plugins>
          <plugin>
            <groupId>net.java.dev.jaxb2-commons</groupId>
            <artifactId>jaxb-fluent-api</artifactId>
            <version>2.1.8</version>
          </plugin>
        </plugins>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>generate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
...
</pre>
<p>
For example usage of generated classes and minimal unit testing see <code>PatientExampleTest.java</code>:
</p>
<p>PatientExampleTest.java</p>
<pre class="brush:java">
package org.adrianwalker.fhir.resources;

import java.io.ByteArrayOutputStream;
import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

/*
 * Patient Example xml from: https://www.hl7.org/fhir/patient-example.xml.html
 */
public final class PatientExampleTest {

  private static Unmarshaller unmarshaller;
  private static Marshaller marshaller;

  @BeforeClass
  public static void setUp() throws JAXBException {

    JAXBContext context = JAXBContext.newInstance(Patient.class);
    unmarshaller = context.createUnmarshaller();
    marshaller = context.createMarshaller();
  }

  @Test
  public void testXmlToPatient() throws JAXBException {

    Patient patient = unmarshalPatient("src/test/resources/patient-example.xml");

    Assert.assertEquals("example", patient.getId().getValue());
    Assert.assertEquals("Chalmers", patient.getName().get(0).getFamily().getValue());
    Assert.assertEquals("Peter", patient.getName().get(0).getGiven().get(0).getValue());
    Assert.assertEquals("James", patient.getName().get(0).getGiven().get(1).getValue());
  }

  @Test
  public void testPatientToXml() throws JAXBException {

    Patient patient = new Patient()
        .withId(new Id().withValue("test"))
        .withName(new HumanName()
            .withGiven(new String().withValue("Adrian"))
            .withFamily(new String().withValue("Walker")));

    Assert.assertEquals(
        "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"
        + "<Patient xmlns=\"http://hl7.org/fhir\" xmlns:ns2=\"http://www.w3.org/1999/xhtml\">"
        + "<id value=\"test\"/>"
        + "<name>"
        + "<family value=\"Walker\"/>"
        + "<given value=\"Adrian\"/>"
        + "</name>"
        + "</Patient>",
        marshalPatient(patient));
  }

  private Patient unmarshalPatient(final java.lang.String filename) throws JAXBException {

    JAXBElement<Patient> element = unmarshaller.unmarshal(
        new StreamSource(new File(filename)), Patient.class);

    return element.getValue();
  }

  private java.lang.String marshalPatient(final Patient patient) throws JAXBException {

    JAXBElement<Patient> element = new ObjectFactory().createPatient(patient);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    marshaller.marshal(element, baos);

    return baos.toString();
  }
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/fhir-jaxb">fhir-jaxb</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-40841306928848594732017-09-24T22:21:00.001+01:002017-09-25T09:21:18.987+01:00FTP files into Apache Cassandra with Apache FtpServer<p>
<a href="https://mina.apache.org/ftpserver-project/">Apache FtpServer</a> provides an API to allow you to implement your own file system to back file uploads and downloads. Using the <a href="https://git-wip-us.apache.org/repos/asf?p=mina-ftpserver.git;a=tree;f=core/src/main/java/org/apache/ftpserver/filesystem/nativefs">native file system</a> as a guide, this project builds on a <a href="http://www.adrianwalker.org/2017/09/another-apache-cassandra-file-system.html">previous blog post - Another Apache Cassandra File System</a>, which implements a chunked file system with persistence provided by <a href="http://cassandra.apache.org/">Apache Cassandra</a> to read/write files directly to/from the database.
</p>
<p>
Make sure you clone and build the file system project and stand up the Cassandra database from <a href="https://github.com/adrianwalker/cassandra-filesystem">this git repository</a> before using the code in this blog post - it's a required dependency.
</p>
<p>
To create an alternative file system, you need to implement three interfaces from the FtpServer <a href="https://mina.apache.org/ftpserver-project/ftplet.html">ftplet-api</a>: <code>FileSystemFactory</code>, <code>FileSystemView</code> and <code>FtpFile</code>.
</p>
<p>
CassandraFileSystemFactory.java
</p>
<pre class="brush:java">
package org.adrianwalker.ftpserver.filesystem;

import org.adrianwalker.cassandra.filesystem.controller.FileSystemController;
import org.apache.ftpserver.ftplet.FileSystemFactory;
import org.apache.ftpserver.ftplet.FileSystemView;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.User;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class CassandraFileSystemFactory implements FileSystemFactory {

  private static final Logger LOGGER = LoggerFactory.getLogger(CassandraFileSystemFactory.class);

  private final FileSystemController controller;

  public CassandraFileSystemFactory(final FileSystemController controller) {

    LOGGER.debug("controller = {}", controller);

    if (null == controller) {
      throw new IllegalArgumentException("controller is null");
    }

    this.controller = controller;
  }

  @Override
  public FileSystemView createFileSystemView(final User user) throws FtpException {

    LOGGER.debug("user = {}", user);

    if (null == user) {
      throw new IllegalArgumentException("user is null");
    }

    return new CassandraFileSystemView(user, controller);
  }
}
</pre>
<p>
CassandraFileSystemView.java
</p>
<pre class="brush:java">
package org.adrianwalker.ftpserver.filesystem;

import static java.io.File.separator;

import org.adrianwalker.cassandra.filesystem.controller.FileSystemController;
import org.adrianwalker.cassandra.filesystem.entity.File;
import org.apache.ftpserver.ftplet.FileSystemView;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.FtpFile;
import org.apache.ftpserver.ftplet.User;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class CassandraFileSystemView implements FileSystemView {

  private static final Logger LOGGER = LoggerFactory.getLogger(CassandraFileSystemView.class);

  private final User user;
  private final FileSystemController controller;
  private final String homeDirectory;
  private String workingDirectory;

  public CassandraFileSystemView(final User user, final FileSystemController controller) {

    LOGGER.debug("user = {}, controller = {}", user, controller);

    if (null == user) {
      throw new IllegalArgumentException("user is null");
    }

    if (null == controller) {
      throw new IllegalArgumentException("controller is null");
    }

    this.user = user;
    this.controller = controller;
    this.homeDirectory = user.getHomeDirectory();
    this.workingDirectory = homeDirectory;
  }

  @Override
  public FtpFile getHomeDirectory() throws FtpException {

    LOGGER.debug("homeDirectory = {}", homeDirectory);

    FtpFile file = getFile(homeDirectory);
    if (!file.doesExist()) {
      file = createDirectory(homeDirectory);
    }

    return file;
  }

  @Override
  public FtpFile getWorkingDirectory() throws FtpException {

    LOGGER.debug("workingDirectory = {}", workingDirectory);

    FtpFile file = getFile(workingDirectory);
    if (!file.doesExist()) {
      file = createDirectory(workingDirectory);
    }

    return file;
  }

  @Override
  public boolean changeWorkingDirectory(final String workingDirectory) throws FtpException {

    LOGGER.debug("workingDirectory = {}", workingDirectory);

    FtpFile file = getFile(workingDirectory);
    boolean exists = file.doesExist();
    if (exists) {
      this.workingDirectory = file.getAbsolutePath();
}
return exists;
}
@Override
public FtpFile getFile(final String name) throws FtpException {
LOGGER.debug("name = {}", name);
if (null == name) {
throw new IllegalArgumentException("name is null");
}
String path = normalize(name);
File file = controller.getFile(path);
return new CassandraFtpFile(user, path, file, controller);
}
@Override
public boolean isRandomAccessible() throws FtpException {
return false;
}
@Override
public void dispose() {
}
private String normalize(final String name) {
LOGGER.debug("name = {}", name);
Path path;
if (name.startsWith(separator)) {
path = Paths.get(name);
} else {
path = Paths.get(workingDirectory, name);
}
String normalizedName = path
.normalize()
.toString();
LOGGER.debug("normalizedName = {}", normalizedName);
return normalizedName;
}
private FtpFile createDirectory(final String path) {
LOGGER.debug("path = {}", path);
File directory = new File();
directory.setName(Paths.get(path).getFileName().toString());
directory.setDirectory(true);
directory.setOwner(user.getName());
directory.setGroup(user.getName());
directory.setModified(System.currentTimeMillis());
controller.saveFile(path, directory);
return new CassandraFtpFile(user, path, directory, controller);
}
}
</pre>
<p>
CassandraFtpFile.java
</p>
<pre class="brush:java">
package org.adrianwalker.ftpserver.filesystem;
import static java.util.stream.Collectors.toList;
import org.adrianwalker.cassandra.filesystem.controller.FileSystemController;
import org.adrianwalker.cassandra.filesystem.entity.File;
import org.apache.ftpserver.ftplet.FtpFile;
import org.apache.ftpserver.ftplet.User;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Paths;
import java.util.List;
public final class CassandraFtpFile implements FtpFile {
private static final Logger LOGGER = LoggerFactory.getLogger(CassandraFtpFile.class);
private final User user;
private final String path;
private File file;
private final FileSystemController controller;
public CassandraFtpFile(
final User user,
final String path,
final File file,
final FileSystemController controller) {
LOGGER.debug("user = {}, path = {}, file = {}, controller = {}", user, path, file, controller);
if (null == user) {
throw new IllegalArgumentException("user is null");
}
if (null == path) {
throw new IllegalArgumentException("path is null");
}
if (null == controller) {
throw new IllegalArgumentException("controller is null");
}
this.user = user;
this.path = path;
this.file = file;
this.controller = controller;
}
@Override
public String getAbsolutePath() {
LOGGER.debug("path = {}", path);
return path;
}
@Override
public String getName() {
String name = file.getName();
LOGGER.debug("name = {}", name);
return name;
}
@Override
public boolean isHidden() {
boolean hidden = file.isHidden();
LOGGER.debug("hidden = {}", hidden);
return hidden;
}
@Override
public boolean isDirectory() {
boolean directory = file.isDirectory();
LOGGER.debug("directory = {}", directory);
return directory;
}
@Override
public boolean isFile() {
boolean file = !isDirectory();
LOGGER.debug("file = {}", file);
return file;
}
@Override
public boolean doesExist() {
boolean exists = file != null;
LOGGER.debug("exists = {}", exists);
return exists;
}
@Override
public boolean isReadable() {
boolean readable = doesExist();
LOGGER.debug("readable = {}", readable);
return readable;
}
@Override
public boolean isWritable() {
boolean writable = path.startsWith(user.getHomeDirectory());
LOGGER.debug("writable = {}", writable);
return writable;
}
@Override
public boolean isRemovable() {
boolean removable = doesExist() && isWritable();
LOGGER.debug("removable = {}", removable);
return removable;
}
@Override
public String getOwnerName() {
String owner = file.getOwner();
LOGGER.debug("owner = {}", owner);
return owner;
}
@Override
public String getGroupName() {
String group = file.getGroup();
LOGGER.debug("group = {}", group);
return group;
}
@Override
public int getLinkCount() {
int linkCount = file.isDirectory() ? 2 : 1;
LOGGER.debug("linkCount = {}", linkCount);
return linkCount;
}
@Override
public long getLastModified() {
long lastModified = file.getModified();
LOGGER.debug("lastModified = {}", lastModified);
return lastModified;
}
@Override
public boolean setLastModified(final long lastModified) {
LOGGER.debug("lastModified = {}", lastModified);
file.setModified(lastModified);
return null != controller.saveFile(path, file);
}
@Override
public long getSize() {
long size = file.getSize();
LOGGER.debug("size = {}", size);
return size;
}
@Override
public Object getPhysicalFile() {
LOGGER.debug("file = {}", file);
return file;
}
@Override
public boolean mkdir() {
LOGGER.debug("path = {}", path);
File directory = new File();
directory.setName(Paths.get(path).getFileName().toString());
directory.setDirectory(true);
directory.setOwner(user.getName());
directory.setGroup(user.getName());
directory.setModified(System.currentTimeMillis());
return null != controller.saveFile(path, directory);
}
@Override
public boolean delete() {
LOGGER.debug("path = {}", path);
return controller.deleteFile(path);
}
@Override
public boolean move(final FtpFile ftpFile) {
LOGGER.debug("ftpFile = {}", ftpFile);
if (null == ftpFile) {
throw new IllegalArgumentException("ftpFile is null");
}
return controller.moveFile(path, ftpFile.getAbsolutePath());
}
@Override
public List<CassandraFtpFile> listFiles() {
LOGGER.debug("path = {}", path);
return controller.listFiles(path)
.stream().map(file -> new CassandraFtpFile(
user, Paths.get(path, file.getName()).toString(), file, controller))
.collect(toList());
}
@Override
public OutputStream createOutputStream(final long offset) throws IOException {
LOGGER.debug("offset = {}", offset);
if (offset != 0) {
throw new IllegalArgumentException("non-zero offset unsupported");
}
if (null == file) {
file = new File();
file.setName(Paths.get(path).getFileName().toString());
file.setDirectory(false);
file.setOwner(user.getName());
file.setGroup(user.getName());
file.setModified(System.currentTimeMillis());
controller.saveFile(path, file);
}
return new BufferedOutputStream(controller.createOutputStream(file));
}
@Override
public InputStream createInputStream(final long offset) throws IOException {
LOGGER.debug("offset = {}", offset);
if (offset != 0) {
throw new IllegalArgumentException("non-zero offset unsupported");
}
return new BufferedInputStream(controller.createInputStream(file));
}
}
</pre>
<p>
Example usage with an embedded FTP server:
</p>
<pre class="brush:java">
private void exampleUsage() throws FtpException {
ListenerFactory listenerFactory = new ListenerFactory();
listenerFactory.setPort(8021);
FtpServerFactory serverFactory = new FtpServerFactory();
serverFactory.addListener("default", listenerFactory.createListener());
Cluster cluster = new Cluster.Builder()
.addContactPoints("127.0.0.1")
.withPort(9042)
.build();
Session session = cluster.connect("filesystem");
FileSystemController controller = new FileSystemController(session);
serverFactory.setFileSystem(new CassandraFileSystemFactory(controller));
PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
userManagerFactory.setFile(new File("users.properties"));
serverFactory.setUserManager(userManagerFactory.createUserManager());
FtpServer server = serverFactory.createServer();
server.start();
}
</pre>
<p>
With a users properties file, where the test username is <code>testuser</code> and the password is <code>password</code>, stored as an MD5 hash:
</p>
<p>users.properties</p>
<pre class="brush:text">
ftpserver.user.testuser.homedirectory=/testuser
ftpserver.user.testuser.userpassword=5f4dcc3b5aa765d61d8327deb882cf99
ftpserver.user.testuser.maxloginnumber=3
ftpserver.user.testuser.writepermission=true
</pre>
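<p>
The <code>userpassword</code> value above is just the MD5 digest of the plain-text password, hex encoded. A quick way to generate one using only the JDK - a standalone sketch, not part of the project:
</p>
<pre class="brush:java">
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Password {

  // hex-encoded MD5 digest of the given text
  public static String md5(final String text) {
    try {
      byte[] digest = MessageDigest.getInstance("MD5")
          .digest(text.getBytes(StandardCharsets.UTF_8));
      return String.format("%032x", new BigInteger(1, digest));
    } catch (final NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(final String[] args) {
    // prints 5f4dcc3b5aa765d61d8327deb882cf99
    System.out.println(md5("password"));
  }
}
</pre>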
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/ftpserver-filesystem">ftpserver-filesystem</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
<h3>Java Turing Machine (2017-09-17)</h3>
<p>
Here is a Turing Machine implemented in Java as described by the Wikipedia article:
<br/>
<a href="https://en.wikipedia.org/wiki/Turing_machine">https://en.wikipedia.org/wiki/Turing_machine</a>
</p>
<p>
With the copy subroutine test taken from:
<br/>
<a href="https://en.wikipedia.org/wiki/Turing_machine_examples">https://en.wikipedia.org/wiki/Turing_machine_examples</a>
</p>
<p>
Tape.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
import static java.util.stream.Collectors.toList;
import static java.util.stream.IntStream.range;
import static java.util.stream.IntStream.rangeClosed;
import java.util.List;
import java.util.TreeMap;
public final class Tape {
private final TreeMap<Integer, String> cells;
private final String blank;
public Tape(final String blank) {
this.cells = new TreeMap<>();
this.blank = blank;
}
public List<String> getCells() {
return rangeClosed(cells.firstKey(), cells.lastKey())
.boxed()
.map(i -> getCell(i))
.collect(toList());
}
public void putCells(final List<String> symbols) {
range(0, symbols.size())
.boxed()
.forEach(i -> putCell(i, symbols.get(i)));
}
public String getCell(final int position) {
return cells.getOrDefault(position, blank);
}
public void putCell(final int position, final String symbol) {
cells.put(position, symbol);
}
}
</pre>
<p>
Head.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
public final class Head {
private final Tape tape;
private final String leftSymbol;
private final String rightSymbol;
private final String noOpSymbol;
private int position = 0;
public Head(
final Tape tape,
final String leftSymbol, final String rightSymbol, final String noOpSymbol) {
this.tape = tape;
this.leftSymbol = leftSymbol;
this.rightSymbol = rightSymbol;
this.noOpSymbol = noOpSymbol;
}
public void move(final String symbol) {
if (noOpSymbol.equals(symbol)) {
return;
}
if (leftSymbol.equals(symbol)) {
position -= 1;
} else if (rightSymbol.equals(symbol)) {
position += 1;
}
}
public String read() {
return tape.getCell(position);
}
public void write(final String symbol) {
if (noOpSymbol.equals(symbol)) {
return;
}
tape.putCell(position, symbol);
}
}
</pre>
<p>
StateRegister.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
public final class StateRegister {
private final String haltState;
private String state;
public StateRegister(final String haltState, final String startState) {
this.haltState = haltState;
this.state = startState;
}
public boolean isHaltState() {
return state.equals(haltState);
}
public String getState() {
return state;
}
public void setState(final String state) {
this.state = state;
}
}
</pre>
<p>
Table.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
import java.util.HashMap;
import java.util.Map;
public final class Table {
public static final class Entry {
private final String state;
private final String symbol;
private final String writeSymbol;
private final String moveTape;
private final String nextState;
public Entry(
final String state, final String symbol,
final String writeSymbol, final String moveTape, final String nextState) {
this.state = state;
this.symbol = symbol;
this.writeSymbol = writeSymbol;
this.moveTape = moveTape;
this.nextState = nextState;
}
public String getState() {
return state;
}
public String getSymbol() {
return symbol;
}
public String getWriteSymbol() {
return writeSymbol;
}
public String getMoveTape() {
return moveTape;
}
public String getNextState() {
return nextState;
}
}
private static final String SEPARATOR = "_";
private final Map<String, Entry> table;
public Table() {
table = new HashMap<>();
}
public void put(
final String state, final String symbol,
final String writeSymbol, final String moveTape, final String nextState) {
table.put(
state + SEPARATOR + symbol,
new Entry(state, symbol, writeSymbol, moveTape, nextState));
}
public Entry get(final String state, final String symbol) {
return table.get(state + SEPARATOR + symbol);
}
}
</pre>
<p>
TuringMachine.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
import org.adrianwalker.turingmachine.Table.Entry;
public final class TuringMachine {
private final Head head;
private final StateRegister stateRegister;
private final Table table;
public TuringMachine(final Head head, final StateRegister stateRegister, final Table table) {
this.head = head;
this.stateRegister = stateRegister;
this.table = table;
}
public long execute() {
long steps = 0;
while (!stateRegister.isHaltState()) {
steps++;
String state = stateRegister.getState();
String symbol = head.read();
Entry entry = table.get(state, symbol);
head.write(entry.getWriteSymbol());
head.move(entry.getMoveTape());
stateRegister.setState(entry.getNextState());
}
return steps;
}
}
</pre>
<p>
TuringMachineTest.java
</p>
<pre class="brush:java">
package org.adrianwalker.turingmachine;
import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public final class TuringMachineTest {
private static final String BLANK = "0";
private static final String MOVE_LEFT = "L";
private static final String MOVE_RIGHT = "R";
private static final String NO_OP = "N";
private static final String HALT_STATE = "H";
private static final String START_STATE = "A";
@Test
public void testBusyBeaver() {
Tape tape = new Tape(BLANK);
Head head = new Head(tape, MOVE_LEFT, MOVE_RIGHT, NO_OP);
StateRegister stateRegister = new StateRegister(HALT_STATE, START_STATE);
Table table = new Table();
table.put("A", "0", "1", "R", "B");
table.put("A", "1", "1", "L", "C");
table.put("B", "0", "1", "L", "A");
table.put("B", "1", "1", "R", "B");
table.put("C", "0", "1", "L", "B");
table.put("C", "1", "1", "N", "H");
TuringMachine machine = new TuringMachine(head, stateRegister, table);
long steps = machine.execute();
assertEquals(13, steps);
assertEquals(asList("1", "1", "1", "1", "1", "1"), tape.getCells());
}
@Test
public void testCopySubroutine() {
Tape tape = new Tape(BLANK);
tape.putCells(asList("1", "1", "1"));
Head head = new Head(tape, MOVE_LEFT, MOVE_RIGHT, NO_OP);
StateRegister stateRegister = new StateRegister(HALT_STATE, START_STATE);
Table table = new Table();
table.put("A", "0", "N", "N", "H");
table.put("A", "1", "0", "R", "B");
table.put("B", "0", "0", "R", "C");
table.put("B", "1", "1", "R", "B");
table.put("C", "0", "1", "L", "D");
table.put("C", "1", "1", "R", "C");
table.put("D", "0", "0", "L", "E");
table.put("D", "1", "1", "L", "D");
table.put("E", "0", "1", "R", "A");
table.put("E", "1", "1", "L", "E");
TuringMachine machine = new TuringMachine(head, stateRegister, table);
long steps = machine.execute();
assertEquals(28, steps);
assertEquals(asList("1", "1", "1", "0", "1", "1", "1"), tape.getCells());
}
}
</pre>
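<p>
The busy beaver test above can be sanity checked with a condensed, single-class version of the same machine - a standalone sketch, independent of the classes in this post:
</p>
<pre class="brush:java">
import java.util.Map;
import java.util.TreeMap;

public class BusyBeaver {

  // transition table: key = state + "_" + symbol, value = {write, move, next state}
  private static final Map<String, String[]> TABLE = Map.of(
      "A_0", new String[] {"1", "R", "B"},
      "A_1", new String[] {"1", "L", "C"},
      "B_0", new String[] {"1", "L", "A"},
      "B_1", new String[] {"1", "R", "B"},
      "C_0", new String[] {"1", "L", "B"},
      "C_1", new String[] {"1", "N", "H"});

  public static long run(final TreeMap<Integer, String> tape) {
    String state = "A";
    int position = 0;
    long steps = 0;
    while (!"H".equals(state)) {
      steps++;
      String symbol = tape.getOrDefault(position, "0");
      String[] entry = TABLE.get(state + "_" + symbol);
      tape.put(position, entry[0]);
      if ("L".equals(entry[1])) {
        position -= 1;
      } else if ("R".equals(entry[1])) {
        position += 1;
      }
      state = entry[2];
    }
    return steps;
  }

  public static void main(final String[] args) {
    TreeMap<Integer, String> tape = new TreeMap<>();
    long steps = run(tape);
    // prints: 13 steps, tape = [1, 1, 1, 1, 1, 1]
    System.out.println(steps + " steps, tape = " + tape.values());
  }
}
</pre>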
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/turing-machine">turing-machine</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
<h3>Another Apache Cassandra File System (2017-09-10)</h3>
<p>
I want to be able to store files in <a href="http://cassandra.apache.org/">Apache Cassandra</a> from a Java application, using something like <a href="https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/analytics/cfsAbout.html">CFS</a> or <a href="https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/analytics/aboutDsefs.html">DSEFS</a>, but both of those appear to be proprietary, part of Datastax Enterprise and closed source.
</p>
<p>
<a href="http://tuplejump.github.io/calliope/snackfs.html">SnackFS</a> seems like a good open-source alternative, written in Scala, but I have a specific use case in mind and want control over the implementation, so I decided to roll my own.
</p>
<p>
The schema for the file system contains four tables, in a keyspace called ‘filesystem’; the CQL looks like this:
</p>
<p>
filesystem.cql
</p>
<pre class="brush:sql">
CREATE TABLE filesystem.file (
id uuid,
name text,
size bigint,
modified bigint,
group text,
owner text,
hidden boolean,
directory boolean,
PRIMARY KEY (id)
);
CREATE TABLE filesystem.chunk (
file_id uuid,
chunk_number int,
content blob,
PRIMARY KEY (file_id, chunk_number)
);
CREATE TABLE filesystem.path (
path text,
file_id uuid,
PRIMARY KEY (path)
);
CREATE TABLE filesystem.parent_path (
path text,
file_id uuid,
PRIMARY KEY (path, file_id)
);
</pre>
<h4>file table</h4>
<p>
The file table has a UUID primary/partitioning key, the ‘id’ column, so a random UUID can be used to enable even distribution around the ring and eliminate hot spots. The file table also contains information about the file: the file name, size in bytes, last modified time, group name, owner name, hidden flag and directory flag. The file table does not contain any information about the file’s absolute path or the contents of the file.
</p>
<h4>chunk table</h4>
<p>
The chunk table stores chunks of file content as a BLOB in the ‘content’ column. The <a href="https://docs.datastax.com/en/cql/3.3/cql/cql_reference/blob_r.html">Datastax site recommends</a> using a relatively small BLOB size:
</p>
<p>
"The maximum theoretical size for a blob is 2 GB. The practical limit on blob size, however, is less than 1 MB."
</p>
<p>
The chunk table’s primary/partitioning key, the ‘file_id’ column, is a UUID and is intended to be equal to the corresponding file ID in the file table. This means a file’s details and content can be read using the same ID, and the file record and the corresponding chunk record(s) will reside on the same node in the ring. The chunk table also has a clustering column, ‘chunk_number’, which holds sequential file content chunk index numbers. This allows chunks to be read in the correct order and allows all the chunks for a given file ID to be deleted with one query.
</p>
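<p>
With a fixed chunk size, a byte offset into a file maps to a chunk number and an offset within that chunk by simple division. The helpers below are illustrative only - they are not part of the project, and assume the 1 MB chunk size used by the controller later in this post:
</p>
<pre class="brush:java">
public class ChunkMath {

  // 1 MB, matching the CAPACITY constant in FileSystemController below
  public static final int CHUNK_SIZE = 1024 * 1024;

  // clustering key value of the chunk holding the byte at this file offset
  public static int chunkNumber(final long offset) {
    return (int) (offset / CHUNK_SIZE);
  }

  // position of that byte within the chunk's content blob
  public static int chunkOffset(final long offset) {
    return (int) (offset % CHUNK_SIZE);
  }

  public static void main(final String[] args) {
    long offset = 3_500_000L;
    // prints: chunk 3, offset 354272
    System.out.println("chunk " + chunkNumber(offset) + ", offset " + chunkOffset(offset));
  }
}
</pre>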
<h4>path and parent_path tables</h4>
<p>
The path table is used as an inverted index to map an absolute file path to a file ID. Its primary/partitioning key, the ‘path’ column, contains the file’s absolute path, and the ‘file_id’ column is the corresponding file UUID.
</p>
<p>
The parent_path table is used as an inverted index to map an absolute directory path to multiple file IDs. Its primary/partitioning key, the ‘path’ column, contains the file’s parent directory’s absolute path. The parent_path table also has a clustering column, the ‘file_id‘ column, which allows directory listing by querying the ‘path’ column primary key and returning all the file IDs contained by the directory.
</p>
<p>
Moving files is accomplished by deleting a file’s ‘path’ and ‘parent_path’ entries and inserting new path information. No changes to the ‘file’ and ‘chunk’ tables are required.
</p>
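<p>
The two inverted indexes and the move operation can be modelled with in-memory maps standing in for the path and parent_path tables - a sketch to show the bookkeeping only, not the Cassandra-backed implementation shown below:
</p>
<pre class="brush:java">
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class PathIndexSketch {

  // absolute path -> file id, standing in for the path table
  final Map<String, UUID> path = new HashMap<>();
  // parent directory path -> file ids, standing in for the parent_path table
  final Map<String, Set<UUID>> parentPath = new HashMap<>();

  public void save(final String p, final UUID fileId) {
    path.put(p, fileId);
    parentPath.computeIfAbsent(parent(p), k -> new HashSet<>()).add(fileId);
  }

  // move = insert new index rows, delete the old ones -
  // the file and chunk rows are untouched
  public void move(final String from, final String to) {
    UUID fileId = path.get(from);
    save(to, fileId);
    if (!from.equals(to)) {
      path.remove(from);
    }
    if (!parent(from).equals(parent(to))) {
      parentPath.get(parent(from)).remove(fileId);
    }
  }

  private static String parent(final String p) {
    return Paths.get(p).getParent().toString();
  }
}
</pre>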
<p>
The entity classes mapped to the tables are:
</p>
<p>
File.java
</p>
<pre class="brush:java">
package org.adrianwalker.cassandra.filesystem.entity;
import com.datastax.driver.core.utils.UUIDs;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.util.UUID;
@Table(keyspace = "filesystem", name = "file")
public final class File {
private UUID id;
private String name;
private long size;
private long modified;
private String group;
private String owner;
private boolean hidden;
private boolean directory;
public File() {
}
@PartitionKey
@Column(name = "id")
public UUID getId() {
if (null == id) {
id = UUIDs.random();
}
return id;
}
public void setId(final UUID id) {
this.id = id;
}
@Column(name = "name")
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
@Column(name = "size")
public long getSize() {
return size;
}
public void setSize(final long size) {
this.size = size;
}
@Column(name = "modified")
public long getModified() {
return modified;
}
public void setModified(final long modified) {
this.modified = modified;
}
@Column(name = "group")
public String getGroup() {
return group;
}
public void setGroup(final String group) {
this.group = group;
}
@Column(name = "owner")
public String getOwner() {
return owner;
}
public void setOwner(final String owner) {
this.owner = owner;
}
@Column(name = "hidden")
public boolean isHidden() {
return hidden;
}
public void setHidden(final boolean hidden) {
this.hidden = hidden;
}
@Column(name = "directory")
public boolean isDirectory() {
return directory;
}
public void setDirectory(final boolean directory) {
this.directory = directory;
}
}
</pre>
<p>
Chunk.java
</p>
<pre class="brush:java">
package org.adrianwalker.cassandra.filesystem.entity;
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.nio.ByteBuffer;
import java.util.UUID;
@Table(keyspace = "filesystem", name = "chunk")
public final class Chunk {
private UUID fileId;
private int chunkNumber;
private ByteBuffer content;
public Chunk() {
}
@PartitionKey
@Column(name = "file_id")
public UUID getFileId() {
return fileId;
}
public void setFileId(final UUID fileId) {
this.fileId = fileId;
}
@ClusteringColumn
@Column(name = "chunk_number")
public int getChunkNumber() {
return chunkNumber;
}
public void setChunkNumber(final int chunkNumber) {
this.chunkNumber = chunkNumber;
}
@Column(name = "content")
public ByteBuffer getContent() {
return content;
}
public void setContent(final ByteBuffer content) {
this.content = content;
}
}
</pre>
<p>
Path.java
</p>
<pre class="brush:java">
package org.adrianwalker.cassandra.filesystem.entity;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.util.UUID;
@Table(keyspace = "filesystem", name = "path")
public final class Path {
private String path;
private UUID fileId;
public Path() {
}
public Path(final String path, final UUID fileId) {
this.path = path;
this.fileId = fileId;
}
@PartitionKey
@Column(name = "path")
public String getPath() {
return path;
}
public void setPath(final String path) {
this.path = path;
}
@Column(name = "file_id")
public UUID getFileId() {
return fileId;
}
public void setFileId(final UUID fileId) {
this.fileId = fileId;
}
}
</pre>
<p>
ParentPath.java
</p>
<pre class="brush:java">
package org.adrianwalker.cassandra.filesystem.entity;
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.util.UUID;
@Table(keyspace = "filesystem", name = "parent_path")
public final class ParentPath {
private String path;
private UUID fileId;
public ParentPath() {
}
public ParentPath(final String path, final UUID fileId) {
this.path = path;
this.fileId = fileId;
}
@PartitionKey
@Column(name = "path")
public String getPath() {
return path;
}
public void setPath(final String path) {
this.path = path;
}
@ClusteringColumn
@Column(name = "file_id")
public UUID getFileId() {
return fileId;
}
public void setFileId(final UUID fileId) {
this.fileId = fileId;
}
}
</pre>
<p>
The class for controlling file operations and creating chunked data input and output streams:
</p>
<p>
FileSystemController.java
</p>
<pre class="brush:java">
package org.adrianwalker.cassandra.filesystem.controller;
import static java.util.Collections.EMPTY_LIST;
import static java.util.stream.Collectors.toList;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.Result;
import com.datastax.driver.mapping.annotations.Accessor;
import com.datastax.driver.mapping.annotations.Param;
import com.datastax.driver.mapping.annotations.Query;
import org.adrianwalker.cassandra.filesystem.entity.Chunk;
import org.adrianwalker.cassandra.filesystem.entity.File;
import org.adrianwalker.cassandra.filesystem.entity.ParentPath;
import org.adrianwalker.cassandra.filesystem.entity.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.file.Paths;
import java.util.List;
import java.util.UUID;
public final class FileSystemController {
@Accessor
private interface ParentPathAccessor {
@Query("SELECT * FROM parent_path WHERE path = :path")
Result<ParentPath> selectParentPathByPath(@Param("path") String path);
}
@Accessor
private interface FileAccessor {
@Query("SELECT * FROM file WHERE id IN :ids")
Result<File> selectFilesByIds(@Param("ids") List<UUID> ids);
}
@Accessor
private interface ChunkAccessor {
@Query("DELETE FROM chunk WHERE file_id = :file_id")
void deleteChunksByFileId(@Param("file_id") UUID fileId);
}
private static final Logger LOGGER = LoggerFactory.getLogger(FileSystemController.class);
private final Mapper<ParentPath> parentPathMapper;
private final Mapper<Path> pathMapper;
private final Mapper<File> fileMapper;
private final Mapper<Chunk> chunkMapper;
private final ParentPathAccessor parentPathAccessor;
private final FileAccessor fileAccessor;
private final ChunkAccessor chunkAccessor;
public FileSystemController(final Session session) {
LOGGER.debug("session = {}", session);
if (null == session) {
throw new IllegalArgumentException("session is null");
}
MappingManager manager = new MappingManager(session);
parentPathMapper = manager.mapper(ParentPath.class);
pathMapper = manager.mapper(Path.class);
fileMapper = manager.mapper(File.class);
chunkMapper = manager.mapper(Chunk.class);
parentPathAccessor = manager.createAccessor(ParentPathAccessor.class);
fileAccessor = manager.createAccessor(FileAccessor.class);
chunkAccessor = manager.createAccessor(ChunkAccessor.class);
}
public File getFile(final String path) {
LOGGER.debug("path = {}", path);
if (null == path) {
throw new IllegalArgumentException("path is null");
}
Path filePath = pathMapper.get(path);
if (null == filePath) {
return null;
}
return fileMapper.get(filePath.getFileId());
}
public File saveFile(final String path, final File file) {
LOGGER.debug("path = {}, file = {}", path, file);
if (null == path) {
throw new IllegalArgumentException("path is null");
}
if (null == file) {
throw new IllegalArgumentException("file is null");
}
if (null == file.getId()) {
file.setId(UUID.randomUUID());
}
pathMapper.save(new Path(path, file.getId()));
parentPathMapper.save(new ParentPath(getParent(path), file.getId()));
file.setModified(System.currentTimeMillis());
fileMapper.save(file);
return file;
}
public boolean deleteFile(final String path) {
LOGGER.debug("path = {}", path);
if (null == path) {
throw new IllegalArgumentException("path is null");
}
File file = getFile(path);
if (null == file) {
return false;
}
pathMapper.delete(path);
String parentPath = getParent(path);
parentPathMapper.delete(parentPath, file.getId());
chunkAccessor.deleteChunksByFileId(file.getId());
fileMapper.delete(file.getId());
return true;
}
public List<File> listFiles(final String parentPath) {
LOGGER.debug("parentPath = {}", parentPath);
if (null == parentPath) {
throw new IllegalArgumentException("parentPath is null");
}
List<UUID> ids = getFileIds(parentPath);
List<File> files;
if (ids.isEmpty()) {
files = EMPTY_LIST;
} else {
files = fileAccessor.selectFilesByIds(ids).all();
}
return files;
}
public boolean moveFile(final String fromPath, final String toPath) {
LOGGER.debug("fromPath = {}, toPath = {}", fromPath, toPath);
if (null == fromPath) {
throw new IllegalArgumentException("fromPath is null");
}
if (null == toPath) {
throw new IllegalArgumentException("toPath is null");
}
File file = getFile(fromPath);
if (null == file) {
return false;
}
String toParentPath = getParent(toPath);
pathMapper.save(new Path(toPath, file.getId()));
parentPathMapper.save(new ParentPath(toParentPath, file.getId()));
String fromParentPath = getParent(fromPath);
if (!fromPath.equals(toPath)) {
pathMapper.delete(fromPath);
}
if (!fromParentPath.equals(toParentPath)) {
parentPathMapper.delete(fromParentPath, file.getId());
}
file.setName(getFileName(toPath));
file.setModified(System.currentTimeMillis());
fileMapper.save(file);
return true;
}
public OutputStream createOutputStream(final File file) {
LOGGER.debug("file = {}", file);
if (null == file) {
throw new IllegalArgumentException("file is null");
}
chunkAccessor.deleteChunksByFileId(file.getId());
return new OutputStream() {
private static final int CAPACITY = 1 * 1024 * 1024;
private Chunk chunk = null;
private int chunkNumber = 0;
private long bytesWritten = 0;
@Override
public void write(final int b) throws IOException {
if (null == chunk) {
chunk = new Chunk();
chunk.setFileId(file.getId());
chunk.setChunkNumber(chunkNumber);
chunk.setContent(ByteBuffer.allocate(CAPACITY));
}
ByteBuffer content = chunk.getContent();
content.put((byte) (b & 0xFF));
if (content.position() == content.limit()) {
save(content);
}
}
@Override
public void close() throws IOException {
if (null != chunk) {
ByteBuffer content = chunk.getContent();
save(content);
}
file.setSize(bytesWritten);
file.setModified(System.currentTimeMillis());
fileMapper.save(file);
}
private void save(final ByteBuffer content) {
content.flip();
chunkMapper.save(chunk);
chunk = null;
chunkNumber++;
bytesWritten += content.limit();
}
};
}
public InputStream createInputStream(final File file) {
LOGGER.debug("file = {}", file);
if (null == file) {
throw new IllegalArgumentException("file is null");
}
return new InputStream() {
private Chunk chunk = null;
private int chunkNumber = 0;
private long bytesRead = 0;
@Override
public int read() throws IOException {
if (bytesRead == file.getSize()) {
return -1;
}
if (null == chunk) {
chunk = chunkMapper.get(file.getId(), chunkNumber);
}
ByteBuffer content = chunk.getContent();
byte b = content.get();
if (content.position() == content.limit()) {
chunk = null;
chunkNumber++;
bytesRead += content.position();
}
return b & 0xFF;
}
};
}
private List<UUID> getFileIds(final String parentPath) {
return parentPathAccessor.selectParentPathByPath(parentPath)
.all()
.stream()
.map(pp -> pp.getFileId())
.collect(toList());
}
private String getParent(final String path) {
java.nio.file.Path parent = Paths.get(path).getParent();
if (null == parent) {
throw new IllegalArgumentException("invalid path");
}
return parent.toString();
}
private String getFileName(final String path) {
java.nio.file.Path fileName = Paths.get(path).getFileName();
if (null == fileName) {
throw new IllegalArgumentException("invalid path");
}
return fileName.toString();
}
}
</pre>
<p>
A simple example which creates a directory, creates a file, writes to the file, lists the directory contents and reads from the file:
</p>
<pre class="brush:java">
private void exampleUsage() {
Cluster cluster = new Cluster.Builder()
.addContactPoints("localhost")
.build();
Session session = cluster.connect("filesystem");
FileSystemController controller = new FileSystemController(session);
// create a directory
File dir = new File();
dir.setName("testdir");
dir.setDirectory(true);
dir.setOwner("test");
dir.setGroup("test");
dir.setHidden(false);
controller.saveFile("/testdir", dir);
// create a file
File file = new File();
file.setName("testfile.txt");
file.setDirectory(false);
file.setOwner("test");
file.setGroup("test");
file.setHidden(false);
file = controller.saveFile("/testdir/testfile.txt", file);
// write contents to file
OutputStream os = new BufferedOutputStream(controller.createOutputStream(file));
os.write("test content".getBytes());
os.flush();
os.close();
// list files
controller.listFiles("/testdir").forEach(f -> {
System.out.println(f.getId() + "\t" + f.getName() + "\t" + f.getSize());
});
// read contents of file
InputStream in = new BufferedInputStream(controller.createInputStream(file));
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
System.out.println(reader.readLine());
reader.close();
in.close();
session.close();
cluster.close();
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/cassandra-filesystem">cassandra-filesystem</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-26851204549195171262017-09-08T20:19:00.004+01:002017-09-15T16:07:26.026+01:00Apache FtpServer LDAP User Manager<p>
<a href="https://mina.apache.org/ftpserver-project/">Apache FtpServer</a> used to be bundled with an LDAP User Manager for authentication, but it was deleted from the repository in <a href="https://git-wip-us.apache.org/repos/asf?p=mina-ftpserver.git;a=commit;h=6fa5544933aa1f54436d7b6d7dbf549c32851f2c">this commit</a> in 2008.
</p>
<p>
Here is an alternative implementation:
</p>
<p>LdapUserManager.java</p>
<pre class="brush:java">
package org.adrianwalker.ftpserver.usermanager.ldap;
import static java.lang.String.format;
import static org.apache.directory.ldap.client.api.search.FilterBuilder.and;
import static org.apache.directory.ldap.client.api.search.FilterBuilder.contains;
import static org.apache.directory.ldap.client.api.search.FilterBuilder.present;
import org.apache.directory.api.ldap.model.entry.Attribute;
import org.apache.directory.api.ldap.model.entry.Entry;
import org.apache.directory.api.ldap.model.exception.LdapInvalidAttributeValueException;
import org.apache.directory.api.ldap.model.message.SearchScope;
import org.apache.directory.api.ldap.model.name.Dn;
import org.apache.directory.ldap.client.api.search.FilterBuilder;
import org.apache.directory.ldap.client.template.EntryMapper;
import org.apache.directory.ldap.client.template.LdapConnectionTemplate;
import org.apache.directory.ldap.client.template.exception.PasswordException;
import org.apache.ftpserver.ftplet.Authentication;
import org.apache.ftpserver.ftplet.AuthenticationFailedException;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.User;
import org.apache.ftpserver.usermanager.UsernamePasswordAuthentication;
import org.apache.ftpserver.usermanager.impl.AbstractUserManager;
import org.apache.ftpserver.usermanager.impl.BaseUser;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
public final class LdapUserManager extends AbstractUserManager {
private static final Logger LOGGER = LoggerFactory.getLogger(LdapUserManager.class);
private static final String ATTR_OBJECT_CLASS = "objectClass";
private static final String ATTR_UID = "uid";
private static final String ATTR_CN = "cn";
private static final String ATTR_SN = "sn";
private static final String ATTR_USER_PASSWORD = "userPassword";
private static final String ATTR_UNIX_FILE_PATH = "unixFilePath";
private static final String ATTR_PWD_ATTRBUTE = "pwdAttribute";
private static final String ATTR_PWD_MAX_IDLE = "pwdMaxIdle";
private static final String ATTR_PWD_LOCKOUT = "pwdLockout";
private static final String OBJECT_CLASS_INET_ORG_PERSON = "inetOrgPerson";
private static final String OBJECT_CLASS_EXTENSIBLE_OBJECT = "extensibleObject";
private final LdapConnectionTemplate ldapConnectionTemplate;
private final String userBaseDn;
public LdapUserManager(
final LdapConnectionTemplate ldapConnectionTemplate,
final String userBaseDn) {
this.ldapConnectionTemplate = ldapConnectionTemplate;
this.userBaseDn = userBaseDn;
}
@Override
public User getUserByName(final String name) throws FtpException {
LOGGER.debug("name = {}", name);
if (null == name) {
throw new IllegalArgumentException("name is null");
}
Dn dn = ldapConnectionTemplate.newDn(format("%s=%s,%s", ATTR_UID, name, userBaseDn));
return ldapConnectionTemplate.lookup(dn, entry -> createUser(entry));
}
@Override
public String[] getAllUserNames() throws FtpException {
Dn dn = ldapConnectionTemplate.newDn(userBaseDn);
FilterBuilder filter = and(
present(ATTR_UID),
contains(ATTR_OBJECT_CLASS, OBJECT_CLASS_INET_ORG_PERSON));
EntryMapper<String> mapper = entry -> toString(entry.get(ATTR_UID));
List<String> userNames = ldapConnectionTemplate.search(dn, filter, SearchScope.ONELEVEL, mapper);
LOGGER.debug("userNames = {}", userNames);
return userNames.toArray(new String[userNames.size()]);
}
@Override
public void delete(final String name) throws FtpException {
LOGGER.debug("name = {}", name);
if (null == name) {
throw new IllegalArgumentException("name is null");
}
Dn dn = ldapConnectionTemplate.newDn(format("%s=%s,%s", ATTR_UID, name, userBaseDn));
ldapConnectionTemplate.delete(dn);
}
@Override
public void save(final User user) throws FtpException {
LOGGER.debug("user = {}", user);
if (null == user) {
throw new IllegalArgumentException("user is null");
}
Dn dn = ldapConnectionTemplate.newDn(format("%s=%s,%s", ATTR_UID, user.getName(), userBaseDn));
String[] objectClasses = {
OBJECT_CLASS_INET_ORG_PERSON, OBJECT_CLASS_EXTENSIBLE_OBJECT
};
Attribute[] attributes = {
ldapConnectionTemplate.newAttribute(ATTR_OBJECT_CLASS, objectClasses),
ldapConnectionTemplate.newAttribute(ATTR_CN, user.getName()),
ldapConnectionTemplate.newAttribute(ATTR_SN, user.getName()),
ldapConnectionTemplate.newAttribute(ATTR_USER_PASSWORD, user.getPassword()),
ldapConnectionTemplate.newAttribute(ATTR_PWD_ATTRBUTE, ATTR_USER_PASSWORD),
ldapConnectionTemplate.newAttribute(ATTR_UNIX_FILE_PATH, user.getHomeDirectory()),
ldapConnectionTemplate.newAttribute(ATTR_PWD_MAX_IDLE, toString(user.getMaxIdleTime())),
ldapConnectionTemplate.newAttribute(ATTR_PWD_LOCKOUT, toString(!user.getEnabled()))
};
ldapConnectionTemplate.add(dn, attributes);
}
@Override
public boolean doesExist(final String name) throws FtpException {
LOGGER.debug("name = {}", name);
if (null == name) {
throw new IllegalArgumentException("name is null");
}
return null != getUserByName(name);
}
@Override
public User authenticate(final Authentication auth) throws AuthenticationFailedException {
LOGGER.debug("auth = {}", auth);
if (null == auth) {
throw new IllegalArgumentException("auth is null");
}
boolean isUsernamePasswordAuth = auth instanceof UsernamePasswordAuthentication;
if (!isUsernamePasswordAuth) {
throw new AuthenticationFailedException();
}
UsernamePasswordAuthentication usernamePasswordAuth = (UsernamePasswordAuthentication) auth;
String username = usernamePasswordAuth.getUsername();
String password = usernamePasswordAuth.getPassword();
Dn dn = ldapConnectionTemplate.newDn(format("%s=%s,%s", ATTR_UID, username, userBaseDn));
try {
ldapConnectionTemplate.authenticate(dn, password.toCharArray());
} catch (final PasswordException pe) {
LOGGER.error(pe.getMessage(), pe);
throw new AuthenticationFailedException(pe);
}
try {
return getUserByName(username);
} catch (final FtpException fe) {
LOGGER.error(fe.getMessage(), fe);
throw new AuthenticationFailedException(fe);
}
}
private User createUser(final Entry entry) throws LdapInvalidAttributeValueException {
BaseUser user = new BaseUser();
user.setName(toString(entry.get(ATTR_UID)));
user.setHomeDirectory(toString(entry.get(ATTR_UNIX_FILE_PATH)));
user.setMaxIdleTime(toInt(entry.get(ATTR_PWD_MAX_IDLE)));
user.setEnabled(!toBoolean(entry.get(ATTR_PWD_LOCKOUT)));
return user;
}
private boolean toBoolean(final Attribute attribute) throws LdapInvalidAttributeValueException {
return Boolean.parseBoolean(toString(attribute));
}
private int toInt(final Attribute attribute) throws LdapInvalidAttributeValueException {
return Integer.parseInt(toString(attribute));
}
private String toString(final Attribute attribute) throws LdapInvalidAttributeValueException {
return attribute.getString();
}
private String toString(final int value) {
return String.valueOf(value);
}
private String toString(final boolean value) {
return String.valueOf(value);
}
}
</pre>
<p>
An example LDAP entry for use with <a href="http://directory.apache.org/">Apache Directory Server</a> should look something like this:
</p>
<p>
testuser.ldif
</p>
<pre class="brush:plain">
version: 1
dn: uid=testuser,ou=users,ou=system
objectClass: extensibleObject
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: testuser
sn: testuser
pwdAttribute: userPassword
pwdLockout: false
pwdMaxIdle: 1800
uid: testuser
unixFilePath: /testuser
userPassword:: e1NTSEF9QUJhbUQ2eHZEbk91czBFVDhzWmtpdk9MWXdSYWRzU3B0UnhlK1E9P
Q==
</pre>
<p>
Example usage with an embedded FTP server:
</p>
<pre class="brush:java">
private static void exampleUsage() throws FtpException {
LdapConnectionConfig config = new LdapConnectionConfig();
config.setLdapHost("localhost");
config.setLdapPort(10389);
config.setName("uid=admin,ou=system");
config.setCredentials("secret");
GenericObjectPool.Config poolConfig = new GenericObjectPool.Config();
poolConfig.maxActive = 200;
poolConfig.maxIdle = 20;
DefaultLdapConnectionFactory ldapConnectionFactory = new DefaultLdapConnectionFactory(config);
ldapConnectionFactory.setTimeOut(1000 * 60 * 3);
ValidatingPoolableLdapConnectionFactory poolableLdapConnectionFactory
= new ValidatingPoolableLdapConnectionFactory(ldapConnectionFactory);
LdapConnectionPool ldapPool = new LdapConnectionPool(poolableLdapConnectionFactory, poolConfig);
LdapConnectionTemplate ldapConnectionTemplate = new LdapConnectionTemplate(ldapPool);
ListenerFactory listenerFactory = new ListenerFactory();
listenerFactory.setPort(8021);
FtpServerFactory serverFactory = new FtpServerFactory();
serverFactory.addListener("default", listenerFactory.createListener());
serverFactory.setUserManager(new LdapUserManager(ldapConnectionTemplate, "ou=users,ou=system"));
FtpServer server = serverFactory.createServer();
server.start();
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/ftpserver-usermanager">ftpserver-usermanager</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-6676456802437709502017-01-26T21:00:00.003+00:002017-02-18T21:03:35.381+00:00sctbrowser - SNOMED CT Browser with UK clinical and UK drug extensions<p>
sctbrowser is a small, simple and fast <a href="http://www.snomed.org/snomed-ct">SNOMED CT</a> browser for viewing international and UK (and possibly other countries') clinical and drug extension RF2 data.
</p>
<p>
The <a href="https://github.com/adrianwalker/sctbrowser">source code</a> is available to download and modify for free, and a free-to-use hosted version is available online here:<br>
<a href="http://sctbrowser.uk">sctbrowser.uk</a>.
</p>
<p>
Its features include:
<ul>
<li>A hierarchical tree browser.</li>
<li>Partial match concept id and description searching.</li>
<li>Refset list browsing.</li>
<li>Subset to refset mappings.</li>
<li>Detailed concept view including concept properties, descriptions and relationships.</li>
<li>Subset member browsing.</li>
<li>Concept member of browsing.</li>
<li>Alternative terminology mappings, such as ICD10.</li>
</ul>
</p>
<p>
sctbrowser is implemented in Java, using an HTML and AngularJS UI, with RESTful webservices and PostgreSQL RDBMS, deploying to Tomcat on Windows or Linux.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifZbamSTE1Hk3LGoRaOMFY5lUf9UqrfdXyMRG3YbHBga-mQBGIjIkTMc7u9K9_cwGaTjGnMGsp1Ae7c2afYqmFtMyOIfzyjnqzSsf5Kc4sy5XsNq8zDb8McOezjZbcEQexAhGhCBNwmnE/s1600/sctbrowser.uk-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifZbamSTE1Hk3LGoRaOMFY5lUf9UqrfdXyMRG3YbHBga-mQBGIjIkTMc7u9K9_cwGaTjGnMGsp1Ae7c2afYqmFtMyOIfzyjnqzSsf5Kc4sy5XsNq8zDb8McOezjZbcEQexAhGhCBNwmnE/s320/sctbrowser.uk-1.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEUKmp6jCZ-r_BEnNJCkZT5YleLpTKBTG8Oqgc-GqwahBA64q7ki7tD5QSvBRznsqadn56pl24corKKKxjiU-h77lCCUJ9GGomwq2uPvog-vQlObBuVZhmTPQXf7BawYS9b2bHAVy-TPE/s1600/sctbrowser.uk-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEUKmp6jCZ-r_BEnNJCkZT5YleLpTKBTG8Oqgc-GqwahBA64q7ki7tD5QSvBRznsqadn56pl24corKKKxjiU-h77lCCUJ9GGomwq2uPvog-vQlObBuVZhmTPQXf7BawYS9b2bHAVy-TPE/s320/sctbrowser.uk-2.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyeo6AFagAfR4hNZEzEG9kfm_8wwGEXFS680FE8crE4KSIufCbrtT1mod1AqXmA8eZvfzztQXyZhUwGlhQuGAFRg1zb0wjE1A8hp8_ktEj5N9VYxieh3IDQAX_bzD6tMgBGWxXZHCdrvw/s1600/sctbrowser.uk-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyeo6AFagAfR4hNZEzEG9kfm_8wwGEXFS680FE8crE4KSIufCbrtT1mod1AqXmA8eZvfzztQXyZhUwGlhQuGAFRg1zb0wjE1A8hp8_ktEj5N9VYxieh3IDQAX_bzD6tMgBGWxXZHCdrvw/s320/sctbrowser.uk-3.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: 
center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjU6NqNR3btvy4Bw2qm1xsuX5newBHwXwg0QqMYNOBlAZoamZs5XWGGlqtHy9_y1GoIHGnyLJHUmLh2OnKMqL_a-0ENfjtOpfSet_P35IhdJtJDdcd1LTJC0KtdzritiDjsI_JUi0q2SBA/s1600/sctbrowser.uk-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjU6NqNR3btvy4Bw2qm1xsuX5newBHwXwg0QqMYNOBlAZoamZs5XWGGlqtHy9_y1GoIHGnyLJHUmLh2OnKMqL_a-0ENfjtOpfSet_P35IhdJtJDdcd1LTJC0KtdzritiDjsI_JUi0q2SBA/s320/sctbrowser.uk-4.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeAiw1MsOYFAHChTiPmaEV3Z1cCGFcpNk3LYpmKpfaZMJgvcCs7ct_fYpIU85qwaEtA06J7-bDoO-9bASKVIMEnN4YcRT_EZsHWeYA_dOXyD6ah7iibonk6tBFovVgoIeXuci9UjpTM_s/s1600/sctbrowser.uk-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeAiw1MsOYFAHChTiPmaEV3Z1cCGFcpNk3LYpmKpfaZMJgvcCs7ct_fYpIU85qwaEtA06J7-bDoO-9bASKVIMEnN4YcRT_EZsHWeYA_dOXyD6ah7iibonk6tBFovVgoIeXuci9UjpTM_s/s320/sctbrowser.uk-5.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbejkSBFiCiD9zWJhsedqNtJbKixFcjMhEJUZWihMg-shz9SAjbTQB_hOA3d788PxBvdymTjTSKXQ8XDT-jmznyDOnyHoPjLibbbiNCs0g-4WdfK8CCONZWNkUwoFC1aEgXjjZdpFW77I/s1600/sctbrowser.uk-6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbejkSBFiCiD9zWJhsedqNtJbKixFcjMhEJUZWihMg-shz9SAjbTQB_hOA3d788PxBvdymTjTSKXQ8XDT-jmznyDOnyHoPjLibbbiNCs0g-4WdfK8CCONZWNkUwoFC1aEgXjjZdpFW77I/s320/sctbrowser.uk-6.png" width="320" height="196" /></a></div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrKyRBMSq9LaTzhGDHLmg5HvV_ZuIXfboBFVXtYNfmKyrHP_Zc8UtsmWfp-DE9hcn5mkjJqL-_LymbjWvJcxmOjQfJZ1zhWA5dI4fRhkY3ZFEw2bIMMa9ePdHkHliY1at-wkRX2_ncfRk/s1600/sctbrowser.uk-7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrKyRBMSq9LaTzhGDHLmg5HvV_ZuIXfboBFVXtYNfmKyrHP_Zc8UtsmWfp-DE9hcn5mkjJqL-_LymbjWvJcxmOjQfJZ1zhWA5dI4fRhkY3ZFEw2bIMMa9ePdHkHliY1at-wkRX2_ncfRk/s320/sctbrowser.uk-7.png" width="320" height="196" /></a></div>
<h4>Some favourite concepts</h4>
<p>
<a href="http://sctbrowser.uk/#!?id=~'242500003&tab=~0">242500003</a>
<br/>
<a href="http://sctbrowser.uk/#!?id=~'301327002&tab=~0">301327002</a>
<br/>
<a href="http://sctbrowser.uk/#!?id=~'8953901000001102&tab=~0">8953901000001102</a>
</p>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/sctbrowser">sctbrowser</a></li>
</ul>
</p>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-91270437792148210162016-08-07T19:27:00.000+01:002016-08-07T21:00:51.294+01:00lg4j – Java library for controlling LG TVs<p>
Inspired by <a href="https://github.com/ubaransel/lgcommander">lgcommander</a>, <a href="https://github.com/adrianwalker/lg4j">lg4j</a> is a Java API for controlling LG TVs via their web service interface.
</p>
<h4>Example usage</h4>
<p>
First you need to find the TV’s IP address and get an authentication key by executing:
</p>
<pre class="brush:java">
Lg4j lg4j = new Lg4j();
String ip = lg4j.discoverIpAddress();
lg4j.displayAuthenticationKey(ip);
</pre>
<p>
This will display a key on the TV in the bottom right corner:
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzjvT_wmg7A6xxCkny3WRwPZF9sqCYdrTu_-KLEsm3zv6F-Tt-PUUqALxxAcAqCaZWBbZJD43cnvzfr9lV_BVNxSL1Jy5j7mj-30mVSUCc7Py9r_dph6upzF8wwfMt0_oEHsTLFHCawp4/s1600/lgauthkey.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzjvT_wmg7A6xxCkny3WRwPZF9sqCYdrTu_-KLEsm3zv6F-Tt-PUUqALxxAcAqCaZWBbZJD43cnvzfr9lV_BVNxSL1Jy5j7mj-30mVSUCc7Py9r_dph6upzF8wwfMt0_oEHsTLFHCawp4/s1600/lgauthkey.png" /></a></div>
<p>
Next, use the authentication key to authenticate with the TV:
</p>
<pre class="brush:java">
lg4j.authenticate(ip, 674689);
</pre>
<p>
After authentication you can send commands to the TV, for example, to turn the volume down one level:
</p>
<pre class="brush:java">
lg4j.sendKey(ip, KeyCodes.VOLUME_DOWN);
</pre>
<p>
Putting it all together:
</p>
<pre class="brush:java">
Lg4j lg4j = new Lg4j();
String ip = lg4j.discoverIpAddress();
lg4j.displayAuthenticationKey(ip);
int session = lg4j.authenticate(ip, 674689);
lg4j.sendKey(ip, KeyCodes.VOLUME_DOWN);
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/lg4j">lg4j</a></li>
</ul>
</p>
<h4>Build and Test</h4>
<p>
The project is a standard Maven project which can be built with:
</p>
<pre>
mvn clean install
</pre>
<p>
For the JUnit tests to pass, an LG TV must be available on the network and the correct authentication key set in the unit test code. Testing was performed against an LG 42LF580V; your mileage may vary with a different model.
</p>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-36870935322121810702016-04-09T14:15:00.001+01:002016-04-09T14:24:14.925+01:00SQL Graph Database Using Continued Fractions<p>
This post is a continuation of a previous post (<a href="http://www.adrianwalker.org/2014/10/continued-fraction-database-file-system.html">Continued Fraction Database File System</a>) which used a SQL RDBMS to implement a file system tree using continued fractions. If you want to understand the maths behind the project, read that previous post first, along with the paper by Dan Hazel on which it is based, <a href="http://arxiv.org/pdf/0806.3115.pdf">Using rational numbers to key nested sets</a>.
</p>
<p>
In the previous post, each node had one path, and so could only represent trees. In a graph, each node can have multiple paths. Each path can be represented by a list of integers, where each integer is the index of the child beneath its parent node, for example:
</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRfRZ0HLvM61vxLe2ys6L1IF51hamj3KEyICwG99j64YsN-i6tB5MVdmvt9d98b0FG4sL1vX4olgyZfANFmbVD10IPqPSElh4uaUfPuJp6NAK4987_aBeHnzNmIJiWO-6EjZSAETUvKdA/s1600/node-paths.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;">
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRfRZ0HLvM61vxLe2ys6L1IF51hamj3KEyICwG99j64YsN-i6tB5MVdmvt9d98b0FG4sL1vX4olgyZfANFmbVD10IPqPSElh4uaUfPuJp6NAK4987_aBeHnzNmIJiWO-6EjZSAETUvKdA/s1600/node-paths.png" />
</a>
</div>
<p>
The integer list paths can be used in a continued fraction to calculate a real number value to be used as the primary key for a node path relation. So our database schema looks like this:
</p>
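As a rough illustration of the idea (the project's own <code>Fraction</code> class handles the details), a path such as [1, 2, 1] can be evaluated as the continued fraction 1 + 1/(2 + 1/1) = 4/3. Below is a minimal sketch with a hypothetical helper class, not the project's actual implementation:

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Hypothetical helper illustrating the encoding: evaluate the path
// [a1, a2, ..., an] as the continued fraction a1 + 1/(a2 + 1/(... + 1/an)),
// keeping it as an exact numerator/denominator pair.
public final class PathKey {

  // Returns {numerator, denominator} of the continued fraction for the path,
  // evaluated from the last term backwards.
  public static long[] fraction(final int... path) {
    long nv = path[path.length - 1]; // numerator, starting from the last term
    long dv = 1;                     // denominator
    for (int i = path.length - 2; i >= 0; i--) {
      // a + dv/nv = (a * nv + dv) / nv
      long next = path[i] * nv + dv;
      dv = nv;
      nv = next;
    }
    return new long[] {nv, dv};
  }

  // Converts the fraction to a decimal key, as stored in the node_path table.
  public static BigDecimal decimal(final long[] nvDv) {
    return new BigDecimal(nvDv[0]).divide(new BigDecimal(nvDv[1]), MathContext.DECIMAL64);
  }

  public static void main(final String... args) {
    long[] f = fraction(1, 2, 1); // 1 + 1/(2 + 1/1) = 4/3
    System.out.println(f[0] + "/" + f[1] + " = " + decimal(f));
  }
}
```

Note that with this plain evaluation different paths can collapse to the same value, for example [1, 1, 1] and [1, 2] both give 3/2; Hazel's encoding deals with this ambiguity, which is also why a sibling bound (the sid column) is stored alongside each key.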
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieK6Ua_Q1NSgO_4N1cTYioCHob_hpbgABenRKdo_Iyu9AwLE0FUcPltqIpHzNTc1MSqy6JeI7RixI3EPEjD9slFwVpCZusS74W4MWlo-m12TECxjwlwEyJe-jKyddcpBnXTVGZwWhqF2Q/s1600/sql-graph-database.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;">
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieK6Ua_Q1NSgO_4N1cTYioCHob_hpbgABenRKdo_Iyu9AwLE0FUcPltqIpHzNTc1MSqy6JeI7RixI3EPEjD9slFwVpCZusS74W4MWlo-m12TECxjwlwEyJe-jKyddcpBnXTVGZwWhqF2Q/s1600/sql-graph-database.png" />
</a>
</div>
<p>
This model could be extended with a properties relation to store multiple key-value pairs for each node and path.
</p>
<p>
The JPA entities for the node and path relations are:
</p>
<p>Node.java</p>
<pre class="brush:java">
package org.adrianwalker.continuedfractions.graph.entity;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import javax.persistence.Basic;
import static javax.persistence.CascadeType.ALL;
import javax.persistence.Column;
import javax.persistence.Entity;
import static javax.persistence.FetchType.LAZY;
import javax.persistence.GeneratedValue;
import static javax.persistence.GenerationType.IDENTITY;
import javax.persistence.Id;
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import javax.persistence.OneToMany;
import javax.persistence.Table;
@Entity
@Table(name = "node")
@NamedQueries({
@NamedQuery(name = "tree",
query = "SELECT DISTINCT n2 "
+ "FROM Node n1, Node n2, NodePath np1, NodePath np2 "
+ "WHERE n1.id = np1.node.id "
+ "AND (np2.id >= np1.id AND np2.id < np1.sid) "
+ "AND n2.id = np2.node.id "
+ "AND n1.id = :id "
+ "ORDER BY n2.id"
),
@NamedQuery(name = "parents",
query = "SELECT DISTINCT n2 "
+ "FROM Node n1, Node n2, NodePath np1, NodePath np2 "
+ "WHERE n1.id = np1.node.id "
+ "AND n2.id = np2.node.id "
+ "AND (np1.id > np2.id AND np1.id < np2.sid) "
+ "AND np2.hops = np1.hops - 1 "
+ "AND n1.id = :id "
+ "ORDER BY np2.id"
),
@NamedQuery(name = "children",
query = "SELECT DISTINCT n2 "
+ "FROM Node n1, Node n2, NodePath np1, NodePath np2 "
+ "WHERE n1.id = np1.node.id "
+ "AND n2.id = np2.node.id "
+ "AND (np2.id > np1.id AND np2.id < np1.sid) "
+ "AND np2.hops = np1.hops + 1 "
+ "AND n1.id = :id "
+ "ORDER BY np2.id"
)})
public class Node implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy = IDENTITY)
@Basic(optional = false)
@Column(name = "id", nullable = false)
private Long id;
@Basic(optional = false)
@Column(name = "name", nullable = false)
private String name;
@OneToMany(fetch = LAZY, cascade = ALL, orphanRemoval = true, mappedBy = "node")
private List<NodePath> nodePaths;
public Node() {
}
public Long getId() {
return id;
}
public void setId(final Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public List<NodePath> getNodePaths() {
if (null == nodePaths) {
nodePaths = new ArrayList<>();
}
return nodePaths;
}
public void setNodePaths(final List<NodePath> nodePaths) {
this.nodePaths = nodePaths;
}
@Override
public int hashCode() {
int hash = 5;
hash = 89 * hash + Objects.hashCode(this.id);
return hash;
}
@Override
public boolean equals(final Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
return Objects.equals(this.id, ((Node) obj).id);
}
@Override
public String toString() {
return "Node{" + "id=" + id + ", name=" + name + '}';
}
}
</pre>
<p>NodePath.java</p>
<pre class="brush:java">
package org.adrianwalker.continuedfractions.graph.entity;
import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Objects;
import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;
import org.adrianwalker.continuedfractions.Fraction;
@Entity
@Table(name = "node_path",
uniqueConstraints = {
@UniqueConstraint(columnNames = {"id", "sid"})
},
indexes = {
@Index(columnList = "hops", unique = false)
})
public class NodePath implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@Basic(optional = false)
@Column(name = "id", nullable = false, precision = (Fraction.SCALE * 2) - 1, scale = Fraction.SCALE)
private BigDecimal id;
@Basic(optional = false)
@Column(name = "sid", nullable = false, precision = (Fraction.SCALE * 2) - 1, scale = Fraction.SCALE)
private BigDecimal sid;
@Basic(optional = false)
@Column(name = "hops", nullable = false)
private Integer hops;
@ManyToOne(optional = false)
@JoinColumn(name = "node_id", referencedColumnName = "id", nullable = false)
private Node node;
public NodePath() {
}
public BigDecimal getId() {
return id;
}
public void setId(final BigDecimal id) {
this.id = id;
}
public BigDecimal getSid() {
return sid;
}
public void setSid(final BigDecimal sid) {
this.sid = sid;
}
public Integer getHops() {
return hops;
}
public void setHops(final Integer hops) {
this.hops = hops;
}
public Node getNode() {
return node;
}
public void setNode(final Node node) {
this.node = node;
}
@Override
public int hashCode() {
int hash = 3;
hash = 41 * hash + Objects.hashCode(this.id);
return hash;
}
@Override
public boolean equals(final Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
return Objects.equals(this.id, ((NodePath) obj).id);
}
@Override
public String toString() {
return "NodePath{" + "id=" + id + ", sid=" + sid + ", hops=" + hops + '}';
}
}
</pre>
<p>And a simple JPA controller:</p>
<p>Controller.java</p>
<pre class="brush:java">
package org.adrianwalker.continuedfractions.graph.controller;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Query;
import org.adrianwalker.continuedfractions.graph.entity.Node;
import org.adrianwalker.continuedfractions.graph.entity.NodePath;
public final class Controller {
private final EntityManagerFactory emf;
public Controller(final EntityManagerFactory emf) {
this.emf = emf;
}
public EntityManager getEntityManager() {
return emf.createEntityManager();
}
public Node create(final Node node) throws Exception {
EntityManager em = getEntityManager();
try {
begin(em);
em.persist(node);
end(em);
} finally {
em.close();
}
return node;
}
public NodePath create(final NodePath nodePath) throws Exception {
EntityManager em = getEntityManager();
try {
begin(em);
em.persist(nodePath);
end(em);
} finally {
em.close();
}
return nodePath;
}
public List<Node> tree(final Node node) {
EntityManager em = getEntityManager();
Query tree = em.createNamedQuery("tree");
tree.setParameter("id", node.getId());
try {
return tree.getResultList();
} finally {
em.close();
}
}
public List<Node> parents(final Node node) {
EntityManager em = getEntityManager();
Query children = em.createNamedQuery("parents");
children.setParameter("id", node.getId());
try {
return children.getResultList();
} finally {
em.close();
}
}
public List<Node> children(final Node node) {
EntityManager em = getEntityManager();
Query children = em.createNamedQuery("children");
children.setParameter("id", node.getId());
try {
return children.getResultList();
} finally {
em.close();
}
}
private void begin(final EntityManager em) {
em.getTransaction().begin();
}
private void end(final EntityManager em) {
em.getTransaction().commit();
}
}
</pre>
<p>
The Graph class calls the controller to persist Node and NodePath entities. It has methods to add nodes and node paths, to list all the nodes beneath a given node, and methods to get the immediate parents and children of a given node:
</p>
<p>Graph.java</p>
<pre class="brush:java">
package org.adrianwalker.continuedfractions.graph;
import java.math.BigDecimal;
import java.util.List;
import static org.adrianwalker.continuedfractions.Fraction.decimal;
import org.adrianwalker.continuedfractions.graph.controller.Controller;
import org.adrianwalker.continuedfractions.graph.entity.Node;
import org.adrianwalker.continuedfractions.graph.entity.NodePath;
import static org.adrianwalker.continuedfractions.graph.Path.sibling;
import static org.adrianwalker.continuedfractions.Fraction.fraction;
public final class Graph {
private final Controller controller;
public Graph(final Controller controller) {
this.controller = controller;
}
public Node addNode(final String name) throws Exception {
Node node = new Node();
node.setName(name);
return controller.create(node);
}
public NodePath addPath(final Node node, final int... path) throws Exception {
NodePath nodePath = new NodePath();
int[] nvDv = fraction(path);
BigDecimal id = decimal(nvDv);
int[] snvSdv = fraction(sibling(path));
BigDecimal sid = decimal(snvSdv);
nodePath.setId(id);
nodePath.setSid(sid);
nodePath.setHops(path.length);
nodePath.setNode(node);
return controller.create(nodePath);
}
public List<Node> tree(final Node node) {
return controller.tree(node);
}
public List<Node> parents(final Node node) {
return controller.parents(node);
}
public List<Node> children(final Node node) {
return controller.children(node);
}
}
</pre>
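<p>
The addPath method converts an integer path into BigDecimal ids by way of the Fraction and Path helpers, which aren't shown in this post. As a rough sketch of the general idea only (a hypothetical stand-in, not the library's actual formula), a path can be evaluated as a simple continued fraction so that every distinct path maps to a distinct numeric value:
</p>

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Hypothetical sketch of a path-to-number encoding - NOT the actual
// Fraction/Path implementation used by the Graph class above.
final class PathEncoding {

  // Evaluates the path a1.a2...an as the simple continued fraction
  // a1 + 1/(a2 + 1/(... + 1/an)), working from the innermost term out.
  static BigDecimal encode(final int... path) {
    BigDecimal x = BigDecimal.valueOf(path[path.length - 1]);
    for (int i = path.length - 2; i >= 0; i--) {
      x = BigDecimal.valueOf(path[i])
              .add(BigDecimal.ONE.divide(x, MathContext.DECIMAL64));
    }
    return x;
  }
}
```

<p>
The real encoding also derives a sibling id (sid) for each node, which presumably lets the named queries select a whole subtree as a numeric range.
</p>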
<p>
Below is a unit test example of how to use the Graph class to create the graph in the image above:
</p>
<p>GraphTest.java</p>
<pre class="brush:java">
package org.adrianwalker.continuedfractions.graph;
import java.util.Arrays;
import static java.util.stream.Collectors.toList;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.adrianwalker.continuedfractions.graph.controller.Controller;
import org.adrianwalker.continuedfractions.graph.entity.Node;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
public class GraphTest {
private static EntityManagerFactory emf;
public GraphTest() {
}
@BeforeClass
public static void setUpClass() {
emf = Persistence.createEntityManagerFactory("graph");
}
@AfterClass
public static void tearDownClass() {
emf.close();
}
@Before
public void setUp() {
}
@After
public void tearDown() {
}
/*
A
/|\
B | C
\|/
D
|
E
*/
@Test
public void testGraph() throws Exception {
Controller nc = new Controller(emf);
Graph hierarchy = new Graph(nc);
Node a = hierarchy.addNode("A");
Node b = hierarchy.addNode("B");
Node c = hierarchy.addNode("C");
Node d = hierarchy.addNode("D");
Node e = hierarchy.addNode("E");
hierarchy.addPath(a, 1);
hierarchy.addPath(b, 1, 1);
hierarchy.addPath(c, 1, 2);
hierarchy.addPath(d, 1, 3);
hierarchy.addPath(d, 1, 1, 1);
hierarchy.addPath(d, 1, 2, 1);
hierarchy.addPath(e, 1, 1, 1, 1);
hierarchy.addPath(e, 1, 2, 1, 1);
hierarchy.addPath(e, 1, 3, 1);
// trees
Assert.assertEquals(Arrays.asList(new String[]{"A", "B", "C", "D", "E"}),
hierarchy.tree(a).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"B", "D", "E"}),
hierarchy.tree(b).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"C", "D", "E"}),
hierarchy.tree(c).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"D", "E"}),
hierarchy.tree(d).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"E"}),
hierarchy.tree(e).stream()
.map(node -> node.getName())
.collect(toList()));
// children
Assert.assertEquals(Arrays.asList(new String[]{"B", "C", "D"}),
hierarchy.children(a).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"D"}),
hierarchy.children(b).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"D"}),
hierarchy.children(c).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"E"}),
hierarchy.children(d).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{}),
hierarchy.children(e).stream()
.map(node -> node.getName())
.collect(toList()));
// parents
Assert.assertEquals(Arrays.asList(new String[]{}),
hierarchy.parents(a).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"A"}),
hierarchy.parents(b).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"A"}),
hierarchy.parents(c).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"A", "B", "C"}),
hierarchy.parents(d).stream()
.map(node -> node.getName())
.collect(toList()));
Assert.assertEquals(Arrays.asList(new String[]{"D"}),
hierarchy.parents(e).stream()
.map(node -> node.getName())
.collect(toList()));
}
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/continued-fractions-graph">continued-fractions-graph</a></li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-9312711263992358592016-04-09T11:59:00.001+01:002016-04-09T12:12:02.879+01:00ANTLR Dynamic Runtime Tokens and Rules<p>
<a href="http://www.antlr.org/">ANTLR</a> lexer tokens and parser rules are normally coded into the grammar and are not modifiable while the code is running, but I needed to add lexer rule tokens and enable or disable parser rules at runtime. So here's an example of how you might do that.
</p>
<p>
First, two classes to hold lexer tokens and the enabled/disabled status of parser rules:
</p>
<p>LexerLookup.java</p>
<pre class="brush:java">
package org.adrianwalker.antlr.dynamicrules;
import static java.lang.String.format;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.logging.Logger;
import static org.adrianwalker.antlr.dynamicrules.DynamicRulesParser.ruleNames;
import org.antlr.v4.runtime.CharStream;
public enum LexerLookup {
INSTANCE;
private static final Logger LOGGER = Logger.getLogger(LexerLookup.class.getName());
private static final Comparator<String> LONGEST_FIRST = (s1, s2) -> s2.length() - s1.length();
private final Map<Integer, Set<String>> tokenIdTermsMap;
private LexerLookup() {
tokenIdTermsMap = new HashMap<>();
}
public void put(final int tokenId, final List<String> tokens) {
if (null == tokens) {
throw new IllegalArgumentException("tokens must not be null");
}
tokens.removeIf(Objects::isNull);
Collections.sort(tokens, LONGEST_FIRST);
LinkedHashSet<String> tokenSet = new LinkedHashSet<>(tokens);
LOGGER.info(format("tokens '%s' %s\n", ruleNames[tokenId - 1], tokenSet));
this.tokenIdTermsMap.put(tokenId, tokenSet);
}
public boolean contains(final int tokenId, final CharStream input) {
boolean contains = false;
if (!tokenIdTermsMap.containsKey(tokenId)) {
return contains;
}
Set<String> terms = tokenIdTermsMap.get(tokenId);
for (String term : terms) {
contains = ahead(term, input);
if (contains) {
LOGGER.info(format("contains '%s' ('%s')\n", term, ruleNames[tokenId - 1]));
break;
}
}
return contains;
}
private boolean ahead(final String word, final CharStream input) {
for (int i = 0; i < word.length(); i++) {
char wordChar = word.charAt(i);
int inputChar = input.LA(i + 1);
if (inputChar != wordChar) {
return false;
}
}
input.seek(input.index() + word.length() - 1);
return true;
}
}
</pre>
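<p>
The interesting part of LexerLookup is the ahead method: CharStream.LA is 1-based lookahead relative to the current position, and on a match the stream is seeked to the word's last character so the single '.' in the lexer rule consumes it. The same mechanics can be shown with a plain string cursor standing in for ANTLR's CharStream (a hypothetical stand-in, for illustration only):
</p>

```java
// Hypothetical stand-in for ANTLR's CharStream, to illustrate the
// lookahead-and-seek mechanics used by LexerLookup.ahead above.
final class Lookahead {

  private final String input;
  private int index; // index of the next unconsumed character

  Lookahead(final String input) {
    this.input = input;
  }

  // Mirrors CharStream.LA: 1-based lookahead from the current position.
  int LA(final int i) {
    int pos = index + i - 1;
    return pos < input.length() ? input.charAt(pos) : -1; // -1 = EOF
  }

  // Returns true if 'word' starts at the cursor; on a match, advances
  // the cursor to the word's last character (mirroring the seek to
  // index() + word.length() - 1) so one final consume finishes the word.
  boolean ahead(final String word) {
    for (int i = 0; i < word.length(); i++) {
      if (LA(i + 1) != word.charAt(i)) {
        return false;
      }
    }
    index += word.length() - 1;
    return true;
  }
}
```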
<p>ParserLookup.java</p>
<pre class="brush:java">
package org.adrianwalker.antlr.dynamicrules;
import static java.lang.String.format;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;
public enum ParserLookup {
INSTANCE;
private static final Logger LOGGER = Logger.getLogger(ParserLookup.class.getName());
private final Map<Integer, Boolean> ruleIdEnabledMap;
private ParserLookup() {
ruleIdEnabledMap = new HashMap<>();
}
public void put(final int ruleId, final boolean enabled) {
LOGGER.info(format("ruleId = %s, enabled = %s\n", ruleId, enabled));
this.ruleIdEnabledMap.put(ruleId, enabled);
}
public boolean enabled(final int ruleId) {
return ruleIdEnabledMap.getOrDefault(ruleId, true);
}
}
</pre>
<p>
These two classes are used by the grammar to assign values to lexer rules and to enable or disable parser rules like this:
</p>
<p>DynamicRules.g4</p>
<pre class="brush:plain">
grammar DynamicRules;
@lexer::header {
import org.adrianwalker.antlr.dynamicrules.LexerLookup;
}
@lexer::members {
public static final LexerLookup LOOKUP = LexerLookup.INSTANCE;
}
@parser::header {
import org.adrianwalker.antlr.dynamicrules.ParserLookup;
}
@parser::members {
public static final ParserLookup LOOKUP = ParserLookup.INSTANCE;
}
// Parser Rules
sentence : ({LOOKUP.enabled(RULE_words)}? words) FULL_STOP ;
words : WORD (WS WORD)+ ;
// Lexer Rules
WORD : {LOOKUP.contains(WORD, _input)}? . ;
FULL_STOP : '.' ;
WS : [ \t\r\n]+ ;
OTHER : . ;
</pre>
<p>
The parser and lexer generated from the ANTLR grammar can be used with the lexer and parser lookup classes to set token values and to enable or disable rules:
</p>
<p>SentenceParser.java</p>
<pre class="brush:java">
package org.adrianwalker.antlr.dynamicrules;
import java.util.List;
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.RecognitionException;
public final class SentenceParser {
public SentenceParser() {
}
public void setWords(final List<String> words) {
LexerLookup.INSTANCE.put(DynamicRulesLexer.WORD, words);
}
public void enableWords(final boolean enabled) {
ParserLookup.INSTANCE.put(DynamicRulesParser.RULE_words, enabled);
}
public Result parse(final String term) throws RecognitionException {
DynamicRulesLexer lexer = new DynamicRulesLexer(new ANTLRInputStream(term));
CommonTokenStream tokens = new CommonTokenStream(lexer);
DynamicRulesParser parser = new DynamicRulesParser(tokens);
return new Result(parser.sentence().getText(), parser.getNumberOfSyntaxErrors());
}
public static final class Result {
private String text;
private int numberOfSyntaxErrors;
public Result(final String text, final int numberOfSyntaxErrors) {
this.text = text;
this.numberOfSyntaxErrors = numberOfSyntaxErrors;
}
public String getText() {
return text;
}
public void setText(final String text) {
this.text = text;
}
public int getNumberOfSyntaxErrors() {
return numberOfSyntaxErrors;
}
public void setNumberOfSyntaxErrors(final int numberOfSyntaxErrors) {
this.numberOfSyntaxErrors = numberOfSyntaxErrors;
}
}
}
</pre>
<p>
Some unit tests for example usage:
</p>
<p>SentenceParserTest.java</p>
<pre class="brush:java">
package org.adrianwalker.antlr.dynamicrules;
import static java.util.Arrays.asList;
import org.adrianwalker.antlr.dynamicrules.SentenceParser.Result;
import org.junit.Assert;
import org.junit.Test;
public class SentenceParserTest {
@Test
public void testValid() {
SentenceParser parser = new SentenceParser();
parser.enableWords(true);
parser.setWords(asList(new String[]{
"on", "cat", "mat", "sat", "the"
}));
Result result = parser.parse("the cat sat on the mat.");
Assert.assertEquals("the cat sat on the mat.", result.getText());
Assert.assertEquals(0, result.getNumberOfSyntaxErrors());
}
@Test
public void testDisabledRule() {
SentenceParser parser = new SentenceParser();
parser.enableWords(false);
parser.setWords(asList(new String[]{
"on", "cat", "mat", "sat", "the"
}));
Result result = parser.parse("the cat sat on the mat.");
Assert.assertEquals(1, result.getNumberOfSyntaxErrors());
}
@Test
public void testInvalidWords() {
SentenceParser parser = new SentenceParser();
parser.enableWords(false);
parser.setWords(asList(new String[]{
"on", "cat", "mat", "sat", "the"
}));
Result result = parser.parse("INVALID");
Assert.assertEquals(1, result.getNumberOfSyntaxErrors());
}
@Test
public void testUpdateWordsAndDisableRule() {
SentenceParser parser = new SentenceParser();
parser.enableWords(true);
parser.setWords(asList(new String[]{
"the"
}));
Result result = parser.parse("the cat sat on the mat.");
Assert.assertEquals(1, result.getNumberOfSyntaxErrors());
parser.setWords(asList(new String[]{
"on", "cat", "mat", "sat", "the"
}));
result = parser.parse("the cat sat on the mat.");
Assert.assertEquals(0, result.getNumberOfSyntaxErrors());
parser.enableWords(false);
result = parser.parse("the cat sat on the mat.");
Assert.assertEquals(1, result.getNumberOfSyntaxErrors());
}
}
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/antlr-dynamic-rules">antlr-dynamic-rules</a></li>
</ul>
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-64479058458012423242016-04-09T11:14:00.002+01:002016-08-07T21:03:29.043+01:00Getting Started With AngularJS<p>
I've only done a couple of projects with <a href="https://angularjs.org/">AngularJS</a>, but the hardest part of using Angular each time has been just getting up and running. After that it's been pretty plain sailing. So here is a starting point to get me (and you?) hitting the ground running next time.
</p>
<p>
The project uses <a href="https://docs.angularjs.org/api/ngRoute">ngRoute</a> and <a href="https://docs.angularjs.org/api/ngResource">ngResource</a> to create a single-page app which calls a REST web service (<a href="http://jsonplaceholder.typicode.com">http://jsonplaceholder.typicode.com</a>).
</p>
<p>
All the JavaScript dependencies are hosted on <a href="https://www.cloudflare.com/">Cloudflare</a>, so there is nothing extra to download.
</p>
<p>
To turn this into a real project, you're going to want to refactor the code a little, pull controllers and services into their own files, etc., but like I said, the code is just a starting point that works.
</p>
<p>angularjs-demo.html</p>
<pre class="brush:html">
<!DOCTYPE html>
<html>
<head>
<title>Angular JS Demo</title>
<link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/css/bootstrap.css" rel="stylesheet">
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.3/jquery.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.3/angular.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.3/angular-route.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular-resource/1.5.3/angular-resource.js"></script>
<script src="angularjs-demo.js"></script>
</head>
<body ng-app="angularjs-demo">
<div class="collapse navbar-collapse" role="navigation">
<ul class="nav navbar-nav">
<li><a href="#/posts">Posts</a></li>
</ul>
</div>
<div class="container">
<div class="panel panel-default">
<div class="panel-body">
<div ng-view></div>
</div>
</div>
</div>
</body>
</html>
</pre>
<p>posts.html</p>
<pre class="brush:html">
<!DOCTYPE html>
<html>
<head>
<title>Angular JS Demo - Posts</title>
</head>
<body>
<div>
<h4>Get Post by ID</h4>
Post ID: <input type="text" ng-model="id" />
<button ng-click="query(id)" type="button">Get Post</button>
</div>
<div>
<h4>Post JSON</h4>
<pre>{{postsResponse| json}}</pre>
</div>
</body>
</html>
</pre>
<p>angularjs-demo.js</p>
<pre class="brush:javascript">
angular
.module('angularjs-demo', [
'ngRoute',
'ngResource'
])
.config(function ($routeProvider) {
$routeProvider
.when('/posts', {
templateUrl: 'posts.html',
controller: 'PostsCtrl'
})
.otherwise({
redirectTo: '/posts'
});
})
.factory('Posts', function ($resource) {
var POSTS_URL = 'http://jsonplaceholder.typicode.com/posts/:id';
return $resource(POSTS_URL, {},
{get: {method: 'GET'}}
);
})
.controller('PostsCtrl', function ($scope, $log, Posts) {
function query(id) {
return Posts.get({id: id},
function (data) {
return data;
}, function (err) {
$log.error(JSON.stringify(err));
});
}
$scope.query = function (id) {
$log.info("id = " + id);
$scope.postsResponse = query(id);
};
});
</pre>
<h4>Source Code</h4>
<p>
<ul>
<li>Code available in GitHub - <a href="https://github.com/adrianwalker/angularjs-demo">angularjs-demo</a></li>
</ul>
</p>
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-3451413587413375915.post-69518599301798626282015-08-17T17:23:00.003+01:002015-08-17T17:52:15.934+01:00The short life and fast times of a beekeeper<p>
A couple of years ago I decided I wanted to try beekeeping - nothing big - just a little hive in the back garden. I wasn't bothered about harvesting honey, or making stuff from wax; it was just that I had read so much about bee populations in decline, and I thought I could build a hive and give some bees a place to stay, you know, do my bit to help them along, and in return they could pollinate my strawberries.
</p>
<p>
Also, I find bees pretty fascinating – they are the ultimate in natural decentralized decision making. They are like a distributed computing system made up of thousands of small multi-purpose nodes which on their own can't achieve much, but when they communicate and work together, amazing things become possible.
</p>
<p>
So I downloaded these top-bar hive plans and got to work:
<a href="http://www.biobees.com/build-a-beehive-free-plans.php">http://www.biobees.com/build-a-beehive-free-plans.php</a>
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimFBlSH8ox6nY3iqVLfWjVZpWxc19gaJYKjdr32ng4gHdm0D3g6OQfLjm_IzKXujrtCkA6VdorO34YJCF5NBwinCLsrj3zkgKJGg4maCTsMnoxs7YqzmMj_39DfOCwi_wWinSjbLwq4is/s1600/hive1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimFBlSH8ox6nY3iqVLfWjVZpWxc19gaJYKjdr32ng4gHdm0D3g6OQfLjm_IzKXujrtCkA6VdorO34YJCF5NBwinCLsrj3zkgKJGg4maCTsMnoxs7YqzmMj_39DfOCwi_wWinSjbLwq4is/s320/hive1.jpg" /></a>
<p>Newly built hive, not quite finished</p>
</div>
<p>
After the hive was done I read everything I could get my hands on about bees, and specifically how to attract them. And then waited. And read. And waited. And waited and read.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Do4STXJa13i25An8u59LHlXbbr56JdLMNmp_PtknVG68eIogrNFGIt5gGJLaJJ6YW5Pkm1rRr6GHty4uKH5JTwkegtKlhbdiYv20zWubKwzFnhQaSLvHfQwWr0CP1lFfQ1Dfvcgolhk/s1600/hive2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Do4STXJa13i25An8u59LHlXbbr56JdLMNmp_PtknVG68eIogrNFGIt5gGJLaJJ6YW5Pkm1rRr6GHty4uKH5JTwkegtKlhbdiYv20zWubKwzFnhQaSLvHfQwWr0CP1lFfQ1Dfvcgolhk/s320/hive2.jpg" /></a>
<p>Hive today, completed with felt roof, stained wood, bottom board and entrance holes.</p>
</div>
<p>
For some reason I thought it would be like 'Field Of Bee Dreams' – build it and they will come. They fucking didn't.
</p>
<p>
This year I did the same as the previous two spring/summers to try and attract a swarm. Starting in May, every couple of weeks:
<ul>
<li>
rubbed the inside of the hive and entrance to the hive with pure bee's wax
</li>
<li>
scattered drops of lemongrass oil inside the hive and on the tops of the top bars
</li>
<li>
blobbed some honey inside the hive, on the bottom board, to try and drum up some interest
</li>
</ul>
</p>
<p>
When it got to the start of July I thought that was it for attracting a swarm for this year, and thought I should probably save up and just buy a nuc next year. Then in the second week of July a swarm moved in.
</p>
<p>
I was over the moon!
</p>
<p>
I thought they might be hungry – but didn't want to disturb them just yet, so I made a jar feeder with 1:1 sugar water and hung it on the end of the hive.
</p>
<p>
After a week the bees had shown absolutely no interest in the sugar water, but they seemed busy going back and forth – in and out of the hive. I decided to open up the hive and see what I had.
</p>
<p>
I had attracted the smallest swarm of bees ever. They were in a cluster on the front inside wall of the hive, above the entrance holes, in a ball slightly larger than a tennis ball. Aren't swarms meant to be an amazing sight to behold, comprised of thousands of bees all moving into a new home?
</p>
<p>
They had made no attempt to start building comb either, and seemed happy just sitting there in a cluster. I guess, with the swarm being attracted so late in the season and it being so small, that it must be a cast. Since my new colony was so tiny and they didn't want sugar water, I ordered some Ambrosia Fondant from a nearby beekeeping supplier: <a href="http://www.abelo.co.uk/shop/apifonda-2-5kg/">http://www.abelo.co.uk/shop/apifonda-2-5kg/</a>
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieHSkJ4IYE-xwNnIAtwFOl-wWuZAXzKZUkePMTuwXkoDxjc1kIt2duDAofLY1kau0BjRj0bOMHUj5MpC4vhvZUI3uFprhP0DuWpVfKilPSLY6WsKKRTdEBadtdTEMqnUZ91nWD8EDpXmg/s1600/ambrosia.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieHSkJ4IYE-xwNnIAtwFOl-wWuZAXzKZUkePMTuwXkoDxjc1kIt2duDAofLY1kau0BjRj0bOMHUj5MpC4vhvZUI3uFprhP0DuWpVfKilPSLY6WsKKRTdEBadtdTEMqnUZ91nWD8EDpXmg/s320/ambrosia.jpg" /></a>
<p>Ambrosia feed paste</p>
</div>
<p>
When the fondant arrived, I did another hive inspection. I was expecting to see some comb drawn, as it had been two weeks since they had moved in, and swarms are meant to be notoriously fast builders - or so I thought.
</p>
<p>
They had built a couple of square inches of comb on one bar. This was turning out to be nothing like what I had read about swarms.
</p>
<p>
I figured that because of the bad weather and the lack of comb for honey stores, they really must be hungry by now. I cut some holes in the bag of fondant, hung it from one of the top bars, and placed it right next to the cluster so the bees didn't have to go too far and get too cold to reach it.
</p>
<p>
Because I'd opened the hive twice in two weeks, I decided I should probably leave the bees alone for a while, now that they had started building comb and had a massive bag of food if they needed it.
</p>
<p>
Fast-forward two weeks – for two days now there has been a massive drop-off in hive activity. I might see a bee going in or out of the hive every ten minutes. But then again, why would they need to be out and about? Maybe they are keeping warm, eating the fondant, building comb and just popping out now and again to get water and pollen - surely they have everything they need to get on with comb building and brood rearing, right?
</p>
<p>
I thought I'd best check.
</p>
<p>
I opened the hive today to find about a dozen worker bees in there - that is all.
</p>
<p>
They had eaten some fondant, and built a tiny bit more comb, and even tried to rear some brood - half of which is black and dead in the cells.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF8-YosdgOgm8h0Al9wyc_pu9Jd3CH3z7IW_5aiiB0d4MPZ4n2HlbdO-1d_NKVg1h5qXpVmWR6_NevpISCfnikdC_RfF_ix2gAD046RndYdIprnOhe64XAYaYO_pJQsWpGL_DbZEJ5AYc/s1600/comb1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF8-YosdgOgm8h0Al9wyc_pu9Jd3CH3z7IW_5aiiB0d4MPZ4n2HlbdO-1d_NKVg1h5qXpVmWR6_NevpISCfnikdC_RfF_ix2gAD046RndYdIprnOhe64XAYaYO_pJQsWpGL_DbZEJ5AYc/s320/comb1.jpg" /></a>
<p>First comb with dead brood, and what looks like some attempts at queen cells.</p>
</div>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAQCWAogUoy1fbbHFiUU0xudhkGgytA0LckcWTJ5nYKrZAtcUB5C65QS8Ns_hTMHGzWewtI8nSfIvQBw6lqKM3q2pyEWfp2vWwxuJFiajdF6dBLz0dqAN-V0NNPVpXR33DBsaSPmRIoQ/s1600/comb2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAQCWAogUoy1fbbHFiUU0xudhkGgytA0LckcWTJ5nYKrZAtcUB5C65QS8Ns_hTMHGzWewtI8nSfIvQBw6lqKM3q2pyEWfp2vWwxuJFiajdF6dBLz0dqAN-V0NNPVpXR33DBsaSPmRIoQ/s320/comb2.jpg" /></a>
<p>Second and final comb, barely begun.</p>
</div>
<p>
Oddly there are no dead bees on the floor of the hive, it just looks abandoned, apart from the dozen or so which were left behind.
</p>
<p>
I don't know what went wrong really, I know the swarm was weak to start with, but I thought with some attention and care I would be able to help them become stronger and even get them through the winter.
</p>
<p>
It's all so disappointing. I only got to spend one month as a beekeeper, I guess I'll clean out the hive when the last bee leaves and try to do a better job next year.
</p>Unknownnoreply@blogger.com