5 years ago, if you were looking for a free photo management website you could host yourself, Koken was a great option. I started with Koken for a photography client in 2014, and decided upon it based on the great documentation and very easy theme development.
Note: I would highly suggest against anyone setting up a new Koken site now. I'm not even sure if it's possible! This is for people maintaining their existing sites.
Koken's public beta was released in early 2013 and quickly became fairly popular. Its creator Todd Dominey nailed what photographers wanted and needed from a self-hosted CMS. Although it became stable and fairly well-supported, it never made it to 1.0.
Fast forward a couple of years: in 2015 Koken was bought by NetObjects, a software company which had success in the 90s with a desktop site-builder.
NetObjects continued to update Koken for around 2 years (although focusing on premium functionality), with the last release, v0.22.24, in August 2017. As far as I am aware, there has been no further development of Koken since then. The help centre and social media were active for about another year before they too went quiet.
As of 2020 Koken still functions, but has some major issues. The store and documentation went offline sometime around the end of 2019, which obviously makes development harder, prevents the easy installation of themes and plugins, and even causes issues with logins on some older versions.
There was also a community forum called Koken Community, but as Koken died it went offline not long after, understandably given the lack of any official support.
Every few weeks I get a message on twitter or an email from someone asking for help with their Koken site. This is because I have the 'honour' of being the last tweet on Koken's twitter, @koken. In 2017 I developed a theme for Koken called Monolith for the previously mentioned client. Over the end of 2017 and early 2018 I refined this and released it onto GitHub, open-sourced under a GPLv3 license. At this point I was unaware of the problems going on with Koken, and perhaps had I known, that client would be on a different platform.
Since 2017 I have been maintaining a Koken site and have encountered a few problems that I am consistently asked about. These are my fixes for them; I hope they help!
Note these fixes are all based on v0.22.24 and may differ for previous versions.
When using Koken with PHP 7+ (confirmed on PHP 7.3 and 7.4) and visiting /admin/, you may see a red error box saying "Cannot connect to the API" without any further error message. Usually this box appears alongside a database error or similar, but in this case the problem is code-related. To fix it:
In your installation, find /app/database/DB_Driver.php and take a look at line 1018, where you should see something like this:
else
{
$args = (func_num_args() > 1) ? array_splice(func_get_args(), 1) : null;
if (is_null($args))
{
return call_user_func($function);
}
else
{
return call_user_func_array($function, $args);
}
}
Now, replace the $args declaration with the following two lines. array_splice() takes its first argument by reference, so it needs a real variable rather than the return value of func_get_args():
$func_args = func_get_args();
$args = (func_num_args() > 1) ? array_splice($func_args, 1) : null;
The code should now look like:
else
{
$func_args = func_get_args();
$args = (func_num_args() > 1) ? array_splice($func_args, 1) : null;
if (is_null($args))
{
return call_user_func($function);
}
else
{
return call_user_func_array($function, $args);
}
}
The next issue is images failing to render. You may not notice this immediately if you have images cached, but if you clear the cache or upload a new image you may find it is not rendered. This also occurs on PHP 7+ (confirmed with 7.3 and 7.4) and is another easy fix.
Find /i.php in the root of your installation. On lines 13 and 14 there is the following:
require $root . '/app/koken/Shutter/Shutter.php';
require $root . '/app/koken/Utils/KokenAPI.php';
Replace those lines with:
require_once $root . '/app/koken/Shutter/Shutter.php';
require_once $root . '/app/koken/Utils/KokenAPI.php';
You also need to open /app/koken/Shutter/Shutter.php and on line 274 replace the following (switching to require_once and include_once means each file is loaded only once, whichever code path gets there first):
include dirname(__DIR__) . '/Utils/KokenAPI.php';
with:
include_once dirname(__DIR__) . '/Utils/KokenAPI.php';
The last problem is forgetting your password. First, try entering a wrong password and clicking the "Forgot Password" link that appears in the bottom right. This is the easiest way on more recent versions of Koken.
Unfortunately it seems that previous versions relied upon store.koken.me to offer forgotten-password functionality. We can still reset the password, but it's a bit more manual.
You need access to your Koken database for this, whether that be through phpMyAdmin, another database management tool, or mysql on the command line. I won't bore you with the exact commands/clicks required for each, just the general process:
- Find your database credentials in /storage/configuration/database.php;
- Open the koken_users table. This should have only one entry: your user with associated email etc;
- Note the internal_id for the user.

If you haven't gathered from the rest of this post, unfortunately Koken is dead. I will continue to maintain a Koken site as my client cannot afford the cost of a rebuild and relies upon the Lightroom integration, something I haven't seen anywhere else. I would however say you shouldn't be setting up any new Koken websites, and if you still have one you should be seriously looking at alternatives.
There is a change.org petition calling on NetObjects to open-source Koken, but I am not hopeful. They are a commercial software company, and most software companies will cling to their code to the end.
A couple of people have asked me about alternatives now, so I've included a list below. Unfortunately there's nothing quite like Koken, but hopefully one might fill your needs.
If you have any further issues with Koken then feel free to leave a comment and I'd be happy to help.
As a freelance developer I am more or less obligated to suggest that if you have the money, the best solution will come from a web developer or development agency. This can be the design you want and function exactly as you need. If you're interested in my services as a developer or want some thoughts on what you need, feel free to get in touch.
There are hosted photography portfolio services out there including:
I haven't used any of these platforms, but I have heard they are reliable, easy to use, and not too expensive.
For a bit more work you could use a hosted website builder like Squarespace, Wix, or WordPress.com. This will give you a bit more flexibility than a platform designed to fill a niche.
As much as WordPress can have a bad reputation, if you consider themes and plugins carefully then you can get a great looking and performing WordPress site for photography.
It's worth looking for themes that are designed for photography so functionality like EXIF data, lightboxes, and copy protection is included. I don't have any examples but there are several free ones on the official theme directory and many options in commercial theme directories like ThemeForest.
If you're fairly technically minded, there are also some great programs designed to generate a static website from your content, and once you get used to them they are really quick and simple.
One I've come across is called Prosopopee (github.com/Psycojoker/prosopopee/), designed for photography websites and featuring everything you'd need. It's a bit more involved to publish content as it's done with text files rather than a GUI or Lightroom integration, and you'll probably need a developer initially to make it look how you'd like, but one wouldn't be too hard to find.
Not focused on photography, but again for the more technical: 11ty (11ty.dev) is a static site generator that could absolutely work as a brilliant image gallery. You'd also probably need a developer to get the initial site going and perhaps integrate it with a headless CMS, but with a bit of extra work you get a system that's a lot more flexible and resilient.
If I were to set up a new site for a photographer, this is probably the direction I'd go down. I've been burned by Koken dying in only a few years; a static site and static site generator will be around a lot longer than that.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Fixes for Koken Problems appeared first on alistairshepherd.uk.
It's a big change from Nuxt, Vue and Webpack to doing pretty much everything myself with 11ty (Eleventy) and gulp—I love it. More on that in a future post however, today is about the star of the show—the parallax landscape you see at the top of the page.
If you're the type who wants to dive straight into the code, here's a CodePen - go and have a play!
For those still with me, let's go through it.
Note: I'm writing JavaScript in ES6 and CSS in SCSS. I compile my code anyway so this makes it easier for me to work with.
If you recognise the art style, it's super inspired by the game Firewatch, a 'walking simulator' that came out in 2016 and was much loved for its visuals. Featuring a bright, layered landscape, it inspired many, myself included. For several years my phone's wallpaper changed between a set of Firewatch wallpapers based on time and weather.
When I was planning my new site, I decided to centre it on this art style. I wanted it to feel interactive, and parallax felt like a natural way to do that.
My wonderful sister Becci Shepherd produced the landscape and sent me a raster PNG for each layer. Although I experimented with masking, its browser support isn't quite there. SVGs were the obvious choice.
To convert to vector I used Vector Magic Desktop Edition. It does a brilliant job of anything you throw at it, and is the best raster-to-vector converter I've found.
I tidied up the paths in a graphics program, exported to SVG, then tidied up the markup and optimised with SVGOMG. This left me with a decent-sized SVG for each layer.
Try to ensure the viewBox is identical across layers, as it will make sizing much easier.
Now in HTML, we need to stack them:
<div class="landscape" role="img" aria-label="This is equivalent to an img alt attribute.">
<div class="landscape__layer">
<div class="landscape__image">
<svg viewBox="0 0 4000 1000" xmlns="http://www.w3.org/2000/svg">...</svg>
</div>
</div>
<div class="landscape__layer">
<div class="landscape__image">
<svg viewBox="0 0 4000 1000" xmlns="http://www.w3.org/2000/svg">...</svg>
</div>
</div>
<div class="landscape__layer">
<div class="landscape__image">
<svg viewBox="0 0 4000 1000" xmlns="http://www.w3.org/2000/svg">...</svg>
</div>
</div>
... and so on.
</div>
Remember accessibility! Despite being a whole bunch of markup, this is really a fancy image. We use role="img" and aria-label to make it accessible.
I didn't have the two wrapping divs at first, but realised that wrappers for each layer allowed me to use flexbox. This made positioning the SVGs easier:
// wrapping landscape
.landscape {
background: var(--c1);
height: 75vh;
overflow: hidden;
position: relative;
// make each layer fill parent
.landscape__layer {
height: 100%;
left: 0;
position: absolute;
top: 0;
width: 100%;
}
// svg wrapper
.landscape__image {
// position at bottom of element in center
position: absolute;
bottom: 0;
left: 50%;
transform: translateX(-50%);
// set sizes that work for my image
max-height: 100%;
max-width: 300%;
min-width: 100%;
width: 2500px;
// use flexbox to center SVG elements
display: flex;
flex-direction: column;
}
// basic styling for SVG element
.landscape__image svg {
display: block;
height: auto;
max-width: 100%;
}
}
We now have a static landscape and are set up to make it more dynamic!
There are two popular methods to implement parallax on the web. The more performant implementation is a CSS-only solution using the perspective CSS property with translateZ(). This is what browser vendors suggest, as it allows the browser to render changes with the GPU. This makes it super quick and smooth, and is how I tried to implement it for weeks.
Google Developer docs have a good example of this method.
Although it's great for simple implementations, I found that in my case it was unreliable. This was mainly because I needed transform-style: preserve-3d on every element between my scroll element and my layers.

I spent about two weeks trying to get this working before giving up and going for method two.
JS-based parallax has had a bad rep, as a few popular libraries weren't very performant or accessible. Their size came from dealing with browser inconsistencies, but with modern CSS and JS we can do it ourselves without much work.
With CSS custom properties and calc() we can come up with a light and neat implementation ourselves. In JavaScript we use window.requestAnimationFrame, and if the scroll position has changed we set it to a custom property.
// constant elements: your main scrolling element; html element
const scrollEl = document.documentElement
const root = document.documentElement
let scrollPos
// update css property on scroll
function animation() {
// check the scroll position has changed
if (scrollPos !== scrollEl.scrollTop) {
// reset the seen scroll position
scrollPos = scrollEl.scrollTop
// update css property --scrollPos with scroll position in pixels
root.style.setProperty('--scrollPos', scrollPos + 'px')
}
// call animation again on next animation frame
window.requestAnimationFrame(animation)
}
// start animation on next animation frame
window.requestAnimationFrame(animation)
That's it. That's all the JavaScript we need. As someone who loves CSS it feels great knowing that we can keep the JS simple and use CSS to implement this descriptively.
The real action is happening in the CSS; this is what we need to add to our previous styles:
.landscape__layer {
// parallax
transform: translateY(calc(var(--scrollPos, 0) * var(--offset, 0)));
@media (prefers-reduced-motion: reduce) {
transform: translateY(0);
}
}
The key line is the first transform and its custom properties. What we are doing is translating the layer down a certain amount based on the scroll position.
We use a prefers-reduced-motion media query to remove the parallax effect for those who might get motion-sick or prefer less movement in their browsing.
The --offset property is a value between 0 and 1 that changes how much that layer scrolls. Let's look at what happens when we vary that property and scroll down by 100px:

- --offset: 0 — the element isn't translated and scrolls as normal;
- --offset: 0.5 — the element will be translated down by 50px. This makes it look like it's moved 50px;
- --offset: 1 — the element is translated down 100px, so it's in the same place it used to be. This makes it look like it's not moving with scroll.

The --offset property is the key to our parallax system. If each layer has a different value it will scroll at a different speed from the other layers. We can manually set how much each layer will scroll so it looks natural.
The way we apply this to our layers is using the style property. This way we can avoid adding any more CSS, no matter how many layers we have. We set the front layer to 0 so it scrolls with the content, and increase it with each layer. This is what worked for my image:
<div class="landscape" role="img" aria-label="This is equivalent to an img alt attribute.">
<div class="landscape__layer" style="--offset:0.96">...</div>
<div class="landscape__layer" style="--offset:0.92">...</div>
<div class="landscape__layer" style="--offset:0.9">...</div>
<div class="landscape__layer" style="--offset:0.86">...</div>
<div class="landscape__layer" style="--offset:0.83">...</div>
<div class="landscape__layer" style="--offset:0.8">...</div>
<div class="landscape__layer" style="--offset:0.75">...</div>
<div class="landscape__layer" style="--offset:0.4">...</div>
<div class="landscape__layer" style="--offset:0.2">...</div>
<div class="landscape__layer" style="--offset:0">...</div>
</div>
Notice the big gap between 0.4 and 0.75. If you look at the landscape structure, the loch is a lot further away than the trees. We produce the same effect by jumping the offset a lot further from 0.
And here we have our final parallax landscape!
Thank you for reading! Next up we're going to take this landscape and add colour schemes—including one that matches the visitor's local time!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Making a Parallax SVG Landscape - new site part 1 appeared first on alistairshepherd.uk.
If you haven't tried it yet, visit my website and click the "paint bucket" icon in the top-right to see the theme picker. Here you can change the colour scheme of the website.
There are four 'static' colour schemes of 'Sunrise', 'Day', 'Sunset' and 'Night'. These set the colours to a specific palette.
I implemented two special 'dynamic' colour schemes, the default of 'Live' and 'Cycle'. Live sets the colour scheme of the website to roughly match your local time, whilst Cycle is a 60 second loop animating through the four static schemes above.
The main point of this post is the colour changing functionality, but I'll briefly mention the 'Sun' animation too.
If you want to go straight to the code, enjoy! 👋
Note: This post is more technical and less visual than my previous one. There aren't many demos, and it's mostly code snippets from here on. You've been warned!
I have wanted to implement 'live' functionality on my personal website for a few years. The idea of something that makes my site feel more current and evolves with the day excited me.
My first attempt at this was on my previous site, where I had a background video of a stream on the Isle of Skye. This was a simple 30s loop, but what I wanted was a 24-hour video synced up with your local time. I liked this idea, but it was impractical thanks to the difficulty of getting 24 hours of consistent footage. It also turned out to be a pretty major technical challenge: I had no experience of streaming video, and HLS and DASH weren't widely supported.
When I came up with the idea of the SVG landscape, this seemed like a perfect accompaniment. I could make the time in the 'scene' match up with your local time and demonstrate that through the colours and sun.
Initially I implemented a prototype of this with anime.js—a great JS animation library. When I boiled it down to the essential elements however, the problem was a lot simpler than I thought. There's more JavaScript here than in my previous post, but stick with me!
We are starting from the final CodePen in my previous post. First let us set up our colours in custom properties:
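A minimal sketch of that setup, with the grey values as illustrative placeholders rather than my exact ones:

:root {
  /* neutral grey fallbacks roughly matching the tone of each colour; the JS overrides them */
  --c0: #cccccc; /* lightest */
  --c1: #888888; /* mid */
  --c2: #222222; /* darkest */
}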
As we are going to be using JavaScript to 'enhance' this with the colours of our animation, we're starting with greys that roughly match the tone of our colours.
This helps us in a couple of different situations:

- If the JavaScript never runs, the greys still roughly match the tone of the design rather than leaving it colourless;
- Our JavaScript is loaded at the end of the page rather than in the <head>. That means that for a brief period our fallback colours might be displayed before the JS kicks in. By choosing neutral greys it looks more natural than going from one colour to another—like the saturation is turned up from 0.

So we can access them with JS later, I'm configuring my colours in the JS:
const config = {
states: [
{
at: 0,
name: 'night',
colours: {
c0: '#7da5d5',
c1: '#0c4e8f',
c2: '#00101f'
}
},
{
at: 6,
name: 'sunrise',
colours: {
c0: '#fed4d5',
c1: '#a496c4',
c2: '#2e2c3f'
}
},
{
at: 12,
name: 'day',
colours: {
c0: '#ffe2a6',
c1: '#fc813a',
c2: '#2f1121'
}
},
{
at: 18,
name: 'sunset',
colours: {
c0: '#ffad39',
c1: '#e17b17',
c2: '#1e0000'
}
}
]
}
We'll add to this later, and the at property will become clearer with more code below. We are defining an array of different themes, giving each a name so we can look them up later, and defining our colour palette.
My website has 10 unique colours; I have reduced it to 3 in code snippets for simplicity. If you're interested in all 10, have a look at the CodePens!
In CSS we have the animation and transition properties. These help us animate between two values without needing JS. We should be able to use that to animate our custom properties, right? Unfortunately, not right.
As great as custom properties are, at the moment they have limits. One of those limits is in animation or transitions. At the moment custom properties are strings, so the browser transition engine can't know how to interpolate between two values when they change.
This is one of the things that the Houdini Project is designed to solve, but it is currently Blink-only so that's not well-supported enough for us at the moment. The idea is you specify exactly the type of value a property represents (eg, colour) and the browser can handle interpolating it.
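As a sketch of where that's heading: with Houdini's @property rule (Blink-only at the time of writing), registering one of our colour properties could look like this, and the browser would then know how to interpolate it:

@property --c0 {
  syntax: '<color>';
  inherits: true;
  initial-value: #cccccc;
}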
I found it difficult to tutorial-ise the animation JS so what I'm going to do is include my commented code. Feel free to go back to the CodePen above and have a dig around yourself, or get in touch if you have any questions!
// Configuration of colours and animation states
const config = {
// sets the setInterval interval and the progress function for each animation mode
anims: {
live: {
// A high interval as live changes very infrequently.
interval: 60000,
getProgress: now => {
// Current seconds elapsed this day, divided by number of seconds in the day
const time = (now.getHours() * 3600) + (now.getMinutes() * 60) + now.getSeconds()
return time / 86400
}
},
cycle: {
// A low interval as cycle changes in milliseconds.
interval: 50,
getProgress: now => {
// Current milliseconds elapsed this minute, divided by number of milliseconds in a minute
const time = (now.getSeconds() * 1000) + now.getMilliseconds()
return time / 60000
}
}
},
// States with 'at' specifying the time in hours the state should be.
// 'name' allows referring to it when we add themes later.
// 'colours' is object with key as custom property name and value as colour.
states: [
{
at: 0,
name: 'night',
colours: {
c0: '#7da5d5',
c1: '#0c4e8f',
c2: '#00101f'
}
},
{
at: 6,
name: 'sunrise',
colours: {
c0: '#fed4d5',
c1: '#a496c4',
c2: '#2e2c3f'
}
},
{
at: 12,
name: 'day',
colours: {
c0: '#ffe2a6',
c1: '#fc813a',
c2: '#2f1121'
}
},
{
at: 18,
name: 'sunset',
colours: {
c0: '#ffad39',
c1: '#e17b17',
c2: '#1e0000'
}
}
]
}
const root = document.documentElement
// This changes the interval and progress calculation between
// our dynamic animations 'live' and 'cycle'.
let animMode = 'live'
// Add first element of states to end so we have a seamless loop:
// night > sunrise > day > sunset > night
config.states.push({
...config.states[0],
name: 'end',
at: 24
})
// Declaring our animation loop in a variable allows us to end it when needed.
let animation
function startAnim() {
// Run our update loop immediately after starting.
updateAnim()
// setInterval runs our update loop with a predetermined interval
// based on the animation mode we are using.
animation = setInterval(updateAnim, config.anims[animMode].interval)
}
// If we need to end the animation, this function will stop it
// running again using clearInterval
function endAnim() {
clearInterval(animation)
}
// This runs every update cycle, getting the progress, calculating
// the right colours and applying them to the root element
function updateAnim() {
// Get the progress through the animation. getProgress returns a number between 0 and 1.
// To simplify working with time, we multiply this by 24 to get progress through the day.
const progress = getProgress() * 24
// Find the next 'state' we are transitioning to based on the 'at' property.
// The 'at' property sets at what hour that state should be at.
const nextIndex = config.states.findIndex(frame => {
return frame.at !== 0 && progress < frame.at
})
// The previous 'state' is the one before the next one, so we remove 1.
const lastIndex = nextIndex - 1
// Get the objects for the last and next states
const lastState = config.states[lastIndex]
const nextState = config.states[nextIndex]
// Calculate the difference between the 'at' values of the previous and last states,
// so we can get our progress between them based on the progress we got above.
const diff = nextState.at - lastState.at
const progressCurr = (progress - lastState.at) / diff
// Loop through all the colours. 'key' is the custom property name
Object.keys(lastState.colours).forEach(key => {
// We use hex codes for colours for convenience, but it's a lot easier to transition
// separate Red, Green, Blue values so we convert them to a [R, G, B] array
const lastRGB = hexToRgb(lastState.colours[key])
const nextRGB = hexToRgb(nextState.colours[key])
// Get the new RGB by using 'lerping' to find the value between the last and next
// colours based on how far we are through the current animation.
// The lerp function doesn't necessarily return an int so we round it.
const currRGB = [
Math.round(lerp(lastRGB[0], nextRGB[0], progressCurr)),
Math.round(lerp(lastRGB[1], nextRGB[1], progressCurr)),
Math.round(lerp(lastRGB[2], nextRGB[2], progressCurr))
]
// Apply the custom property to root using the name and our new RGB value.
applyColour(key, currRGB)
})
}
// As we have two different animation 'modes', we change the function used to work
// out the progress depending on that mode. See the config above for how they work.
function getProgress() {
const d = new Date()
const progress = config.anims[animMode].getProgress(d)
return progress
}
// A slightly bewildering regular expression that turns a hex code into a [R, G, B] array.
// Well-tested though so I don't need to touch it!
function hexToRgb(hex) {
var result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex)
return result ? [
parseInt(result[1], 16),
parseInt(result[2], 16),
parseInt(result[3], 16)
] : null
}
// Using 'linear interpolation' gets the value between the start and end values based on progress
function lerp(start, end, progress) {
return (1 - progress) * start + progress * end
}
// Uses name of custom property 'key' and [R, G, B] array and applies to root element
function applyColour(key, colour) {
const colourString = 'rgb(' + colour.join(',') + ')'
root.style.setProperty('--' + key, colourString)
}
// Round number to 'places' number of figures after decimal.
function round(num, places) {
const power = Math.pow(10, places)
return Math.round(num * power) / power
}
// Initialise and start animation.
function init() {
startAnim()
}
init()
With the above code, we have an animated live colour scheme and the flexibility to extend it further. Let's do just that by creating methods to switch between 'dynamic' schemes and our named states.
We'll go through the basic code to change, and then a basic 'theme picker'.
In our configuration, we have set the progress function and interval for each dynamic theme. When we start the animation and when our updateAnim() function runs, they use the value of animMode to choose the correct interval and progress function for the current mode.
This means all we need to do is stop the animation, change animMode, and start it again. For example, to change to 'cycle':
endAnim()
animMode = 'cycle'
startAnim()
And likewise, to switch to 'live', we would do the same process but instead set animMode to 'live'.
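For completeness, that's:

endAnim()
animMode = 'live'
startAnim()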
We included the name property within our state so that we can refer to it when setting the theme. First we need to stop the animation, so that the dynamic state doesn't replace our changes when it next runs. Then, we need to find the colours for the state we would like to apply and apply them. We can do that with this short piece of code.
const theme = 'sunset'
endAnim()
const state = config.states.find(item => item.name === theme)
Object.keys(state.colours).forEach(key => {
applyColour(key, hexToRgb(state.colours[key]))
})
Line 3 uses the handy Array method 'find', which will return the item that matches our condition: where item.name equals our theme name.
We then loop through all the colours of that state and apply them as we did for our dynamic 'themes'.
It's worth building out a theme picker for yourself, but here's a simple implementation to get us started:
<button data-active aria-pressed="true" data-theme="live">Live</button>
<button data-theme="cycle">Cycle</button>
<button data-theme="sunrise">Sunrise</button>
<button data-theme="day">Day</button>
<button data-theme="sunset">Sunset</button>
<button data-theme="night">Night</button>
const themes = document.querySelectorAll('[data-theme]')
if (themes) {
themes.forEach(function(theme) {
theme.addEventListener('click', function(e) {
// remove active state from old theme buttons
themes.forEach(theme => {
theme.removeAttribute('data-active')
theme.removeAttribute('aria-pressed')
})
// add active state to clicked button
this.setAttribute('data-active', '')
this.setAttribute('aria-pressed', 'true')
// get slug for current theme
const themeSlug = this.getAttribute('data-theme')
// end animation
endAnim()
// if dynamic theme, set animMode, start animation and return
if (themeSlug === 'live' || themeSlug === 'cycle') {
animMode = themeSlug
startAnim()
return
}
// find theme state and apply the colours
const state = config.states.find(item => item.name === themeSlug)
Object.keys(state.colours).forEach(key => {
applyColour(key, hexToRgb(state.colours[key]))
})
})
})
}
The final piece to our landscape is a moving sun. You would have thought it would be easy to implement, but it turned out to be more tricky than I first thought.
Let's go over our requirements:
Due to all these reasons, my first thought of using animations becomes hard to implement. Respecting width, height and following an ellipse though sounds like a tricky challenge.
The solution ends up using our favourite feature, the custom property, and exploiting the relationship between ellipses and the sine function.
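To sketch the maths: sin(θ) and sin(θ − π/2) sit a quarter-cycle apart, and sin(θ − π/2) = −cos(θ), so using the pair as coordinates traces a circle as θ grows. Scaling each axis differently squashes that circle into an ellipse:

x = A · sin(θ)
y = B · sin(θ − π/2) = −B · cos(θ)

In our case θ is the time of day mapped onto a full turn (2π × progress / 24), and the 6-hour offset you'll see between --sun-h and --sun-v below is exactly that quarter-cycle.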
We can continue to keep our JavaScript minimal and respect the screen size by using transforms and elements the size of the screen. We add this to our .landscape from the previous post:
<div class="landscape__sunWrap">
<div class="landscape__sun"></div>
</div>
$sun-size: min(4rem, 10vw);
$sun-movement-v: 30%;
$sun-movement-h: 40%;
.landscape {
&__sunWrap {
$distance: 10;
bottom: 10%;
height: 75%;
left: 0;
position: absolute;
transform: translateY(var(--scrollPos, 0));
width: 100%;
@media (prefers-reduced-motion: reduce) {
display: none;
}
}
&__sun {
height: 100%;
left: 0;
position: absolute;
top: 0;
transform:
translateX(calc(#{$sun-movement-h} * var(--sun-h)))
translateY(calc(#{$sun-movement-v} * var(--sun-v)));
width: 100%;
// the actual sun element
&::before {
background: #fff;
border-radius: 50%;
content: '';
height: $sun-size;
left: 50%;
position: absolute;
top: 50%;
transform: translate(-50%, -50%);
width: $sun-size;
}
}
}
Using this code the positioning of our sun is based on rails, constrained by the size of our landscape. --sun-h and --sun-v are numbers between -1 and 1 which are used in the calc() within our transform property to set how far up/down and left/right the sun is.
The advantage of using an element that fills our landscape is that the narrower the element, the less the sun moves horizontally. This leaves us with minimal JS:
function sunPos(progress) {
const sunWrap = document.querySelector('.landscape__sunWrap')
if (sunWrap) {
const sunH = -Math.sin(2 * Math.PI * progress / 24)
const sunV = -Math.sin(2 * Math.PI * (progress - 6) / 24)
sunWrap.style.setProperty('--sun-h', round(sunH, 3))
sunWrap.style.setProperty('--sun-v', round(sunV, 3))
}
}
This involves maths that I'm pretty sure I was taught in High School and University, but I am certain I have almost entirely forgotten! For a square element, this would create a circular movement but by splitting it up into separate components we have our ellipse.
We then run sunPos with our progress in our updateAnim() function, and using the state.at property after setting a static theme.
If you've gotten this far, congratulations and thank you for sticking with me! Here's our final landscape, as above:
This is not the easiest post to read by any stretch of the imagination, but I wanted to get down a lot of info and I struggled to do so in a way that felt natural. Initial drafts were tutorial-like, before I realised I was writing a 10,000-word tutorial!
I am planning to write more, but will be making them shorter and simpler than this one. Keep an eye out for future posts about:
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post SVG Landscape with live colour theming - new site part 2 appeared first on alistairshepherd.uk.
:focus-visible! Although browser support is decent, Safari is still working on this important accessibility feature.
In the meantime, we can load the WICG focus-visible polyfill to offer improved focus styles in browsers that don't yet support it. Ideally we don't force browsers that support :focus-visible to download a polyfill when it's unnecessary - and in future, when all modern browsers support the feature, we don't want to ship that redundant code.
Here is a snippet we can use to only load the focus-visible polyfill if it isn't supported! Insert this before the closing </body> and change the script.src to point to your local copy of the polyfill (or use an asset CDN like jsDelivr).
<script>
try {
document.body.querySelector(':focus-visible');
} catch (error) {
var script = document.createElement('script');
script.src = "/js/focus-visible.js";
document.body.appendChild(script);
}
</script>
You'll also need to write CSS to handle focus indicators in three circumstances: browsers with no :focus-visible support, browsers with native support, and browsers using the polyfill.
This is my setup for these cases:
/**
* My focus styles
*/
:focus {
outline: 2px dashed currentColor;
outline-offset: .25rem;
}
/**
* When focus-visible is supported:
* remove outline when :focus but not :focus-visible
*/
:focus:not(:focus-visible) {
outline: none;
}
/**
* when polyfill loaded:
* remove outline when :focus but not .focus-visible
*/
.js-focus-visible :focus:not(.focus-visible) {
outline: none;
}

A couple of things to note:

- If the polyfill fails to load or JS is disabled, the standard :focus indicator is shown.
- Be careful if you minify your CSS: some optimisers will see that the two outline: none declarations are combined into a single rule. Due to how the browser ignores any rules with selectors it doesn't understand, this won't work. You may need to disable optimisation on this step, or in my case I changed one of the outline: none declarations to outline: 0. This CSS works the same, but means they won't be combined into a single rule by most minifiers.

If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Conditionally loading a polyfill for :focus-visible appeared first on alistairshepherd.uk.
In a previous post I shared how to conditionally load the :focus-visible polyfill only if the requesting browser doesn't support it. Similar to that, this snippet will help you load an image lazyloading JavaScript library only when native lazyloading isn't supported.
Lazyloading images has been a good practice for web page performance for some time, recommended by tools like Lighthouse, PageSpeed Insights and WebPageTest among others. This traditionally had to be implemented using a JS library like lazysizes.
These libraries monitor what is visible within the browser, and only when an image is about to come into view is it loaded. This means the browser won't need to download any images that are never seen - reducing data use and potentially improving front-end performance.
Given the prevalence of this practice, the Chrome team and HTML spec folk introduced lazyloading behaviour natively into the browser via the loading attribute on img tags. We can already make our current img tags lazy by adding loading="lazy" to the element like so:
<img src="/assets/example.jpg" alt="Example image" width="200" height="100" loading="lazy">
Browser support is decent at around 70% between Chromium-based and Firefox-based browsers, but unfortunately it isn't yet in Safari, or on iOS at all.
As with my focus-visible conditional loading, ideally we load a JavaScript library/polyfill only if the new feature isn't supported.
The progressive nature of the loading attribute means older browsers without support will still load the images. That is normally great, as it keeps the web backwards-compatible and often usable in old browsers and devices. In this case however, it makes it a little tricky for us to prevent the loading of images outside of the current view.
Browsers that don't support the attribute ignore it and will just load the images normally. By the time we've loaded our script, the browser may have already downloaded many or all of the images on the page unnecessarily.
What we have to do is provide our markup in the format of the lazyload library we are using. We then check for support of native lazyloading and either load our library or run some JS to adapt our markup to 'normal'.
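As an example, if we were using lazysizes, whose attribute conventions the snippet below assumes (data-src, data-srcset and data-sizes, plus a lazyload class), the image from earlier might look something like this (the file names are placeholders):

<img data-src="/assets/example.jpg" data-srcset="/assets/example.jpg 1x, /assets/example-2x.jpg 2x" data-sizes="auto" alt="Example image" width="200" height="100" loading="lazy" class="lazyload">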
Before the closing </body> we include our conditional loading snippet like this:
<script>
let hasLibLoaded = false;
// in a function so we can re-run if data is added dynamically
window.loadingPolyfill = () => {
// check if loading attribute supported
if ('loading' in HTMLImageElement.prototype) {
// get all <img> and <source> elements
const images = document.querySelectorAll('img[data-src]');
const sources = document.querySelectorAll('source[data-srcset]');
// loop through <img>s setting the src attribute and srcset and sizes if present
for (let img of images) {
img.src = img.getAttribute('data-src');
const srcset = img.getAttribute('data-srcset');
if (srcset) {
img.srcset = srcset;
}
const sizes = img.getAttribute('data-sizes');
if (sizes) {
img.sizes = sizes;
}
}
// loop through <source>s setting the srcset attribute and sizes if present
for (let source of sources) {
source.srcset = source.getAttribute('data-srcset');
const sizes = source.getAttribute('data-sizes');
if (sizes) {
source.sizes = sizes
}
}
// if loading attribute is not supported
} else {
// check we haven't already loaded the library
if (!hasLibLoaded) {
// create script element with src pointing to our library and add to document
const script = document.createElement('script');
script.src = '/js/lazysizes.js';
document.body.appendChild(script);
// mark library as loaded
hasLibLoaded = true;
// lazyloading library has already been loaded
} else {
// depending on your library you may need to run findNewItems() or something along
// those lines to adapt new content. Some libraries including lazysizes don't need this.
}
}
}
// run our loading polyfill
window.loadingPolyfill();
</script>
We assign our function globally on the window object so that if any content is loaded via JavaScript (eg AJAX or client-side routing) you can call window.loadingPolyfill() again and it will re-run, including new images.
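For example:

// call again after new images have been added to the DOM (AJAX, client-side routing, etc)
window.loadingPolyfill();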
A few notes on adapting this snippet:

- Change script.src to point to your JS library - locally or using a CDN like jsDelivr.
- The data attributes here are data-src, data-srcset and data-sizes. Many libraries use this convention but not all, eg Uncloak uses data-uncloak-src.
- You could also add a legacy.js script that has the same functionality as our supporting case, so very old browsers fall back to standard image loading.
In theory browsers are able to start downloading high-priority images before the full document is parsed. Because there is no src
atribute, our solution stops this from happening until our script runs near the end of the document. Unless you have a very long HTML document though, it's unlikely this will be more than a few milliseconds. Regardless, I would suggest avoiding this practice for your most important above-the-fold images like logos or hero images.
As we are loading our JS library asyncronously, this generally means it has a lower download priority than it would otherwise. There is no easy way around this, but I couldn't see any conslusive impact when testing on Safari. Take that with a pinch of salt though, it will depend a lot on how your website is built and the visiting device. I don't think this will be very significant however.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Conditionally loading a native image lazyload polyfill/library appeared first on alistairshepherd.uk.
Although all of this innovation is great, it can sometimes be pretty difficult to keep up with. One of the ways I keep on top of the latest changes, trends and new technologies is by following the blogs and newsletters of companies and developers leading the industry.
I'm always on the lookout for more blogs or newsletters! If you have any suggestions I should check out, get in touch!
I thought I'd crack out a post I can point people to with a list of the feeds and newsletters I subscribe to, with a bit of hopefully useful info so you know if you want to subscribe yourself. Most of these are focused on:
I find it quite fun that the format for the majority of these feeds about modern web development is RSS, a technology that has existed since 1999! If you're looking for a feed reader I use Feedbin, which is a great cloud RSS and newsletter reader across multiple platforms. You can also find an export of my RSS subscriptions on my Github.
Useful for keeping up with changes to the biggest browsers. Topics can be quite technical and writing not particularly approachable, but probably worth a look. I tend to scan the titles to decide if it's something that will likely affect me before reading.
Any blogs by development agencies or dev-related products. A few of the agency blogs have some great tips, best practices and open-source tools. I tend to be a bit wary of product blogs as they're always trying to sell but there are some decent ones out there.
Blogs generally aimed at sharing knowledge about web development. Separate from Developers and Magazines as they feel somewhere between the two.
These are individual devs and designers who are experts in their field. Not as frequent posts, but there are some real gems here. In alphabetical order for simplicity, and just a short summary as there's a few!
The big blogs that feature articles from many different authors. A few only run for a month a year, others are year-round. If you only want to subscribe to a few blogs, make it these ones.
Some feed readers allow you to receive email newsletters in the same place, I used Feedbin for this. Regardless, these are some great newsletters to subscribe to:
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Front End Web Development Feeds and Newsletters appeared first on alistairshepherd.uk.
My first search engine experience came from this, back when Altavista was going to be the big player in web search (according to the book). The book had everything from the finest web portals to random neocities pages with all sorts of wacky graphics, and the early web experiments that captured my imagination.
One of those experiments I remember best was called "Lost in Translation". Like most of the sites and links I remember, it's long dead now, but at the time it was my favourite. Lost in Translation used the then-new Babel Fish translation API (later bought by Yahoo) to take any input text, translate it through several different languages and return the resulting garbled mess back in English. I couldn't wait to get home after school, pretend I was doing schoolwork, and instead try all sorts of sentences and phrases and see how they turned out.
Fast forward 20 years and I'm a web developer myself. It's almost certainly that booklet and those sites that started me on this journey and made me fall in love with the web. Lost in Translation is still something I think about every so often, an example of what the web was in those days and something I'd like a bit more of now.
Anyway, I was asked by some friends to make a quiz round. Inspired by Lost In Translation, I thought I'd make a round where recipe titles were run through several languages and you had to guess which recipe. I'll be honest - the round was absolute fucking shite. The average score was 1/10 and the very best 4/10 - I had created questions so bad people did worse than random guessing.
The result of it wasn't all bad though. When I was testing using the Google Translate UI, I realised some recipes stood unchanged whilst others got completely lost almost instantly. (Turns out that translating modern fad-based recipe names into little-known isolated languages doesn't work too well.) I realised I'd need to do this quite a lot, so wrote a small bit of JavaScript using the Google Translate API to make the process a bit easier.
There's nothing special about it, but having some code that replicates the functionality of Lost In Translation turns out to mean quite a lot to me. It's simple, only took me about an hour (most of which was trying to get a free translation API working) and has achieved its purpose - but I'm unable to just let it go.
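The gist is easy to sketch. This isn't my exact script, but something along these lines would do it using the Google Cloud Translation v2 REST API (the API key, language chain and phrase are placeholders):

const API_KEY = 'YOUR_API_KEY'

// translate a string from one language to another
async function translate(text, source, target) {
  const res = await fetch('https://translation.googleapis.com/language/translate/v2?key=' + API_KEY, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ q: text, source, target, format: 'text' })
  })
  const json = await res.json()
  return json.data.translations[0].translatedText
}

// run a phrase through a chain of languages, then back to English
async function lostInTranslation(text, chain = ['fr', 'ja', 'fi']) {
  let current = text
  let lang = 'en'
  for (const next of chain) {
    current = await translate(current, lang, next)
    lang = next
  }
  return translate(current, lang, 'en')
}

lostInTranslation('The quick brown fox jumps over the lazy dog').then(console.log)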
Putting it and these words on the internet pleases the archivist in me, the same part of me that is sad I no longer have that booklet, even though link rot made it practically useless anyway.
I was just trying to write a basic readme when this monologue came out, but I guess now there are morals about link rot, web archival, nostalgia and wishes for a better web.
But enough of that, have some fucking code if you want it.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Web Nostalgia and Lost In Translation appeared first on alistairshepherd.uk.
I've had a very chaotic few months at the end of 2021, with work, moving house and my first steps into giving tech talks! That has meant that all the blog posts and side projects I planned were left behind, but with some free time before Christmas I thought I'd squeeze a quick and easy post out!
I hope 2021 has been kind and merciful to you, and you have a good holiday period if you're taking one! Thank you so much for reading and supporting me/my work, it means a huge amount! ❤️
So I'm finishing off this year with a very festive post... my website tech stack?! 🌲
In about a month this site will be a year old! Recently a few people have asked me about the tech stack so it's about time I put it and my thinking down properly. I'm really enjoying it!
The entire build is designed to make it as easy as possible for me to work on new content and tweaks after months or years of not touching it. It's been great so far and I find it easier to work on when I come back to it than other platforms I've worked on.
Here's a TLDR if you're not interested in the reasoning:
For Jamstack hosting I really like Netlify. I think their product is brilliant and love how easy it makes deploying and hosting a website. It has tons of features, great documentation and I like their company ethos, principles and staff. My site is primarily hosted with them under the free plan.
However in case they have a major incident or I disagree with their direction, I have a backup copy of my site ready to go with Vercel. If I needed to switch to them, all I'd need to do is update the DNS and it would be done within a few hours if needed. I don't anticipate needing to, but when it's so easy and free to have a backup website I like having the option.
I use Eleventy as my Static Site Generator (SSG) for data manipulation and HTML generation. My site is fairly simple, all I really need is handlebars-style templating, markdown support and reusable JS snippets for custom functionality.
This would give me a lot of SSG options but I had a few priorities that were important. I wanted static HTML without any client-side JavaScript, flexibility with data and structure, easily extendable with JavaScript, and to be in full control of the output. Eleventy was the ideal tool for the job here.
At the time of creation it was a year from stable release but already provided a quick, extendable platform that is easy to work on and has been more stable than at least 4 other major SSGs I've worked with!
I write my blog posts in Markdown but the rest of the site uses Nunjucks for templating.
I use gulp for my build process and tasks as it provides me a lot of freedom with how I want to run tasks and implement builds.
Many people consider gulp to be dated/dead but honestly I much prefer it to many of the big 'build tools' used in development at the moment. Webpack, Rollup and Parcel seem great at first but I've had difficulties with configuration or needed to use gulp alongside them for custom processes.
A year on I would consider simplifying further and using simple node scripts instead of gulp. These would have the benefits of simplicity and stability over time—I would also appreciate fewer dependencies. For getting off the ground quickly though gulp has the edge for me, with a still huge ecosystem and so many previous projects I can pull gulp tasks from.
A key feature is I can write my own tasks in JS and don't need to conform to a config system. The tasks for this site are fairly standard but for long-term maintenance it's useful being able to write my own without having to learn a new 'plugin' syntax. I can also implement builds using whatever tools I like—an example being choosing to use esbuild for JS bundles.
It's not as fancy as some of the latest tools like Vite and Snowpack, but in reality I don't need HMR or instant refreshes for a simple site. And although it's not at the cutting-edge, the API and project stability is helpful for coming back to an older project.
The CSS is mostly handcrafted without any libraries so I can have full control over the structure and performance. I write in Sass (Scss flavour) as I'm used to many of the utilities and conveniences it provides like importing partials, concatenated nesting and variables.
I say 'mostly handcrafted' as I use the utility class generator Gorko to generate classes for spacing, sizing and colours. Utility classes are great for simple rules like changing display or spacing, both for convenience and performance reasons. Anything that needs more than a couple of utility classes though becomes a 'layout' or 'block', written using BEM-like classes (eg .nav__link).
The structure I follow is a variation of Andy Bell's CUBE CSS with Layout, Utilities and Blocks. For performance optimisation I have a 'critical' CSS file that includes the CSS important to display the top of pages correctly, and this is embedded in the head of every page. I use an 11ty transform with PurgeCSS to strip out any unused rules for each page. This makes the first load of each page as fast as possible, and then the rest of the styles are loaded in a 'main' CSS file that is there for lower down the page and cached for subsequent navigations.
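For illustration, that transform is roughly this shape (not my exact build code; the critical CSS path is a placeholder):

const { PurgeCSS } = require('purgecss')

module.exports = eleventyConfig => {
  eleventyConfig.addTransform('critical-css', async (content, outputPath) => {
    // only process HTML output
    if (!outputPath || !outputPath.endsWith('.html')) return content
    // strip rules the current page doesn't use from the critical stylesheet
    const [result] = await new PurgeCSS().purge({
      content: [{ raw: content, extension: 'html' }],
      css: ['src/css/critical.css']
    })
    // inline the purged CSS into the head
    return content.replace('</head>', '<style>' + result.css + '</style></head>')
  })
}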
As I prioritise performance, almost all JavaScript is hand-written and vanilla — no frameworks. This allows me to include only what is needed, which thanks to the capabilities of modern browsers ends up being pretty small. Not including libraries and frameworks improves performance on all devices and means my code is more maintainable in future.
I really like esbuild for transforming source JS, it's extremely fast, very simple and the gulp-esbuild plugin is easy to use. I have it set up to turn a set of modules into a single bundle for performance reasons, minify for production, generate sourcemaps and transform modern syntax to a list of supported browsers.
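The task itself looks roughly like this (paths and targets are illustrative, not my exact config):

const { src, dest } = require('gulp')
const gulpEsbuild = require('gulp-esbuild')

// bundle modules into one file, minify and transpile for target browsers
const scripts = () =>
  src('src/js/main.js')
    .pipe(gulpEsbuild({
      bundle: true,
      minify: true,
      sourcemap: true,
      target: ['es2017']
    }))
    .pipe(dest('dist/js'))

exports.scripts = scripts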
The only library I use is Barba for Client Side Routing (CSR) to maintain the state of the landscape and themes seamlessly across pages. Although I don't normally care much for page transitions and client-side navigations, I couldn't come up with a native solution that was quick, wasn't jarring and maintained the effect. I broke my 'no libraries' rule here as client-side routing isn't easy to get right. Barba does a decent job, is fairly small and I load it separate from the main bundle with low priority to avoid a performance hit.
I'm a big proponent of using Image CDNs, it makes the build process simpler and quicker and further development easier. For more of my thoughts see my recent talk Making Assets fly on the Jamstack with Image CDNs.
I use CloudImage for this site as I like its simplicity and the free tier is generous enough to cover my few images. The performance is good but I'd like to see better, including AVIF support. Imgix and Cloudinary both perform better, but I'm happy with CloudImage for the moment.
I've written a custom Eleventy shortcode with a few parameters to generate src and srcset attributes to do what I need. This would make it easy to switch to a different provider if I wanted. To avoid the performance impact of using a different origin, I proxy CloudImage requests through Netlify using redirects.
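In Netlify's _redirects file that proxying looks something like this, where the token, domain and paths are placeholders for your own CloudImage setup:

# rewrite image requests to CloudImage without changing the visible origin
/images/* https://token.cloudimg.io/v7/example.com/images/:splat 200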
I use Red Hat Display for titles and Literata for the body. Both are on Google Fonts and open source so I can download and manipulate them freely.
To mitigate the performance effect of custom fonts I host them myself and preload the font files. I also reduce their size by subsetting them to US ASCII characters using Glyphhanger which cuts their size almost in half. Thanks to Andy Bell for the font combo!
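The subsetting step is a one-liner along these lines; the flags are from memory, so check the Glyphhanger docs for your own setup (paths are placeholders):

glyphhanger --US_ASCII --subset="fonts/*.ttf" --formats=woff2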
I'm really happy with this stack and I'm hopeful it'll stand the test of time. My previous Next.js personal site was an absolute nightmare of dependency updates a year on from launch, so we're doing better than that.
If you're interested in any specific implementations, the source code is public on GitHub. Note that it's not open-source or licensed for re-use, but if you're looking for a similar setup I'd encourage taking a look and learning from the code!
Whilst you're here check out my other posts about how I built this site, about the dynamic functionality of the landscape and colour themes:
Thank you for reading, best wishes to you and yours, and take care!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Personal site stack for alistairshepherd.uk appeared first on alistairshepherd.uk.
Many websites implement a notice that doesn't allow opt-out, some offer an option that does nothing, whilst others only offer an opt-out solution - conveniently after they've collected all of your data.
Cead (pronounced kee-yed) is a cookie and tracking consent manager that is simple, lightweight, easy to implement and free. It's designed to help you implement a simple Accept or Deny dialog that will actually enable or disable tracking.
Cead was created primarily in response to an increase in unsolicited web surveillance, but also to assist with meeting the standards of regulation including the EU GDPR & ePrivacy and California's CCPA. As privacy legislation becomes more strict, it's important that solutions offer compliant opt-in and opt-out controls, which Cead offers at its core.
Tracking on the web has long been a difficult topic. The interests of business owners, SEO teams, Ad vendors, site users and lawmakers become almost impossible to resolve and frequently ignore each other.
I'm of the opinion that a site should have no tracking. This site has no analytics or anything, because your browsing is your own business. Check out Jeremy Keith's "Ain’t No Party Like a Third Party" for his insight on third-party scripts.
I find however that is an impossible stance to maintain when building sites for other people. They are often used to tracking metrics to evaluate their success, generate leads or target their services.
I've worked in agencies where I've seen and worked on a lot of websites for a variety of clients. They vary in purpose, build, location, size and much more, but one thing almost all have in common is they handle tracking terribly.
This may be familiar to you, but if not let me demonstrate the situation. We build a site for a client and add Google Analytics to it - pretty standard. Google Analytics has an easy way to allow people to opt-out by setting a global variable so we integrate a wee popup that allows the user to opt out.
That works great until the client gets an SEO expert who wants to track conversions better. They ask you to add a couple more scripts and you dutifully do so, but these have no way to opt out so all you can do is add them.
Later on, they want to add more scripts so they either ask for a text box to add them arbitrarily, install plugins, or install a Tag Manager.
Before long, the site has 5 analytics scripts, 10 conversion trackers and a screen recorder. These may not respect the user's privacy settings or have a way to opt out, and the website could slow to a crawl.
Some developers will give up at the beginning of this process and instead of asking consent put a message saying "This site uses cookies and tracks you. Deal with it or fuck off".
There are two reasons why this is a problem: ethical and legal.
Ethically, if this is your site you are stalking your users - standing 2 metres behind them as they peruse your store. The level of what is acceptable here can be debated, but tracking someone's every move without giving them the ability to consent is not justified. Place yourself in the shoes of someone who is being tracked across the web by several trackers, without any knowledge that potentially every interaction and details about their computer and location are being harvested and stored. It's hard to dispute in those circumstances.
This is also illegal in many jurisdictions. Consumer privacy laws like GDPR and ePrivacy in the EU, and CCPA and similar in American states, require some level of consent to web tracking. I'm not a lawyer so contact one for proper advice, but the gist is at minimum you need a way for users to be able to meaningfully opt out of tracking.
The big problem with the requirement to offer an opt-out is that it is very hard to do.
As I mentioned earlier, some scripts like Google Analytics offer a method to opt out. This still isn't ideal as you're loading a tracking script and then checking if you're allowed to run it, but it at least gives you some control.
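As a rough sketch of that mechanism (the 'G-XXXXXXX' measurement ID and the consent storage key here are placeholders):
// 'G-XXXXXXX' stands in for your real measurement ID
if (localStorage.getItem('tracking-consent') !== 'granted') {
  // documented Google Analytics opt-out flag: set before the script runs
  window['ga-disable-G-XXXXXXX'] = true
}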
However, that is one of the few tracking scripts I have come across that offers a way to opt out. Lots of other scripts will happily run as soon as they load, without regard for consequences. Even those that do have opt-out methods may implement them differently for each service, making them a nightmare to manage.
Developers can deal with this by dynamically adding scripts under certain conditions, but clients will want to add their own and may not consider the consequences.
As developers we're left in a difficult position. Laws require that users can opt out of tracking, but we often have no way to offer that.
The way to fix this is to be in control of all tracking scripts, and then load them ourselves in response to a consent status.
There are many solutions for this, as investors have capitalised on businesses grappling with the issue of tracking and consent.
Some large 'privacy-focused' corporations offer pricey 'hosted consent solutions' that supposedly solve all your problems. However when I load the site of one, my browser tells me it's blocked 14 trackers.
If you've ever been annoyed by a cookie popup, it's probably a solution like this. A big annoying popup that makes opting out difficult and will send all your preferences to a tracking service to track your consent.
My opinion is that some of these companies are morally corrupt. Tracking the consent of users on a remote server is still tracking and they charge extortionate fees to fix a problem their own investors created.
I think the fix is a lot easier. Our webpage only runs tracking scripts when we say so. That's why I made Cead Consent.
Cead Consent is a small library designed to solve the issue of tracking consent by controlling when scripts can run on the client-side. By making a tiny modification to tracking scripts we can load them on-demand in response to consent status.
It is designed to be extremely simple, easy to use and lightweight, and I'll give you a quick demo of how you would use it to solve the problem of consent.
Check out the GitHub repo for full instructions on installation and usage.
First we need to install Cead. It can either be loaded from a CDN or installed via npm; here I'll use the CDN to make it easier. We need to add a CSS file, a JavaScript file, and a little bit of HTML:
<html>
<head>
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/cead-consent@1/dist/cead.css">
</head>
<body>
  <div class="cead"><!-- dialog wrapper; class name is an assumption based on the cead__ prefix -->
    <p>Hi! Could we please enable some services and cookies to improve your experience and our website?</p>
    <div class="cead__btns">
      <button class="cead__btn cead__btn--decline">No, thanks.</button>
      <button class="cead__btn cead__btn--accept">Okay!</button>
    </div>
  </div>
  <main><!-- your page content --></main>
  <script src="https://cdn.jsdelivr.net/npm/cead-consent@1/dist/browser.js"></script>
</body>
</html>
Although Cead Consent can be used with all sorts of tracking scripts or pixels, I feel it's at its best when combined with a tag manager like Google Tag Manager.
We manage tracking scripts (and images) by modifying their code slightly so they'll only run when Cead allows them to. When used with a Tag Manager the client or SEO teams can add as many scripts as they'd like to Google Tag Manager and we need to modify only one script for Cead.
When you copy your script from Google Tag Manager, it will look something like this (with a different GTM_MEASUREMENT_ID):
<script>
dataLayer=[];
(function(w,l){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'})})(window,'dataLayer');
</script>
<script async src="https://www.googletagmanager.com/gtm.js?id=GTM_MEASUREMENT_ID&l=dataLayer"></script>
See that last line, the <script async src="...">? All we need to do is change the src attribute to data-src, and add the data-cead attribute, like so:
<script>
dataLayer=[];
(function(w,l){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'})})(window,'dataLayer');
</script>
<script async data-src="https://www.googletagmanager.com/gtm.js?id=GTM_MEASUREMENT_ID&l=dataLayer" data-cead></script>
And that's it! With the installation of Cead and that small change to the script tag we've made it so users can choose to consent to tracking or not and their choice is respected.
Although it's best to avoid adding tracking to sites where possible, it often isn't an option. The next best thing is to use a lightweight, simple consent manager that won't frustrate users, will respect their consent choices, and is free and open-source.
Cead has more options including managing inline scripts, tracking 'pixels', an 'opt-out mode', cookie removal and more. Check out the documentation on the GitHub repo to see all it can do!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Managing tracking consent with Cead Consent appeared first on alistairshepherd.uk.
Today, I wanted to write a little about the section dividers used on my site. These ones:
If you've done any game development they may seem familiar, they're nothing particularly new! They are however a neat thing you can do with SVG and I love those!
For people who just want to dig into a demo, here you go!
If you want more of an explanation then here we go!
You may have noticed, but this website has a bit of a theme. Pat yourself on the back if you guessed that it's a mountain/landscape theme.
The header and colour changes were the entire basis for my new site, and the colour scheme was to be very simple but impactful. It felt fairly natural that the background of each page section should vary between different colours within the theme to separate them. What didn't feel natural though was the hard straight line between them. I played around with curved lines, skewed them, added wobble, but none seemed to feel quite right.
At this point my sister suggested I use a mountain ridge, matching the style of the header. I initially produced a simple SVG manually and inserted it between each section.
I liked how this looked, but when two were visible on screen at once it looked a bit silly for them to be identical (recreation below).
I didn't really want to manually create more; although it would have been a quick workaround, it didn't feel like a real solution. My temporary fix was to manipulate the one I did have, using transform to flip, rotate or scale it so it looked slightly different each time.
It bugged me that my site just had the one ridge design, but I didn't really like any of the solutions I came up with.
Some time later, I read an article from the Joy of Computing newsletter about terrain generation in game development. I really like the Joy of Computing: although I don't have much time to keep up with the wider programming industry, their newsletters highlight cool projects and posts about areas I don't normally follow, like Game Development, DevOps, Hardware or Networking, to name a few.
Although the post was not really relevant to me, it made me realise that terrain generation was exactly what I needed! A method to create unique 'ridges' generated every time I needed a new divider.
The output format was pretty easy: it had to be SVG. That way I could generate it ahead of time and embed it in the document, without needing to rely on client-side JavaScript or outputting a large image file. For my use-case I basically needed a shape with a variable top edge that covers the area below it, to match the background colour.
I needed a way to convert however I generate the points of the line into an SVG path format. My input array in most cases was in the format [ [ x, y ], ... ], acting as a programmatic dot-to-dot. Turns out that although the path syntax seems a bit complex, when you're building it up it ends up making a lot of sense. SVG paths have different 'commands' which do certain things with a few parameters. Check out the path syntax on MDN for them all, but we're mostly interested in L, which draws a line to the specified absolute point. With a viewBox that matches our generation coordinate system we can convert it like so:
// convert points into SVG path
function convertPath(width, height, points) {
// add first M (move) command to go to the first point
const first = points.shift()
let path = `M ${first[0]} ${first[1]}`
// iterate through points adding L (line) commands to path
points.forEach(val => {
path += ` L ${val[0]} ${val[1]}`
})
// close path down from the last point to bottom-right, bottom-left, then back to start
path += ` L ${width} ${height} L 0 ${height} Z`
return path
}
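A quick usage sketch, with made-up points in a 1000x200 coordinate system:
// example points acting as the programmatic dot-to-dot
const points = [[0, 150], [250, 60], [500, 120], [750, 40], [1000, 130]]
const d = convertPath(1000, 200, points)

// drop the path into an SVG with a matching viewBox
document.body.insertAdjacentHTML('beforeend',
`<svg viewBox="0 0 1000 200" preserveAspectRatio="none">
<path d="${d}" fill="currentColor"></path>
</svg>`)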
In my keenness, I jumped straight in with my first thoughts: use Math.random to work out where the next position is and keep going until I've covered the whole width:
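Roughly sketched, that first attempt looked like this (a reconstruction, not the exact code):
// first attempt: a completely random y position at fixed x intervals
function randomRidge(width, height, step) {
  const points = []
  for (let x = 0; x <= width; x += step) {
    points.push([x, Math.random() * height])
  }
  return points
}

const path = convertPath(1000, 200, randomRidge(1000, 200, 10))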
Ah. Not quite what I was going for; less like a mountain ridge and more like a bed of nails. Maybe the issue was that I used fixed intervals, so I tried random intervals too:
Yeah, that looks really cool! Not what I'm wanting though - it has too much randomness and most of the time it just doesn't make sense.
After my first attempt, I actually did some research on terrain generation. I wanted something very simple and fast that I could implement myself in JavaScript.
I discovered the Midpoint Displacement Algorithm, which seemed to fit the bill perfectly. It's a simple algorithm and isn't used very often in modern games as it can't produce sudden steep inclines, overhangs and the like, but for the mostly rolling ridge I wanted it's perfect.
A short summary of how it works: we draw a straight line, then split it into two segments at the midpoint. We take that midpoint and 'displace' it—move it upwards or downwards—by a random amount. We then take the two segments and do the same thing, splitting each in two at its midpoint and displacing that midpoint. With each iteration we reduce the amount each midpoint can move, so as the segments get smaller we get finer and finer detail.
If you're interested in the theory behind it or the implementation I would recommend reading "Landscape generation using midpoint displacement" by Bites of Code. It's a great article about implementing this in Python, and it explains what's happening and why really well. I found it when I was implementing it myself, and most of my code is a JS adaptation of their Python code.
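Here's a minimal sketch of the algorithm in JavaScript (not the exact code used on my site; parameter names are illustrative):
// midpoint displacement sketch: start with a flat line and keep
// splitting segments, displacing each midpoint by a shrinking amount
function midpointDisplacement(width, height, iterations, displacement, roughness) {
  let points = [[0, height / 2], [width, height / 2]]

  for (let i = 0; i < iterations; i++) {
    const next = [points[0]]
    for (let j = 1; j < points.length; j++) {
      const [x1, y1] = points[j - 1]
      const [x2, y2] = points[j]
      // midpoint of the segment, displaced up or down by a random amount
      const midY = (y1 + y2) / 2 + (Math.random() * 2 - 1) * displacement
      next.push([(x1 + x2) / 2, Math.min(height, Math.max(0, midY))])
      next.push(points[j])
    }
    points = next
    // reduce the maximum displacement each iteration for finer detail
    displacement *= Math.pow(2, -roughness)
  }

  return points
}

const ridge = convertPath(1000, 200, midpointDisplacement(1000, 200, 6, 80, 1))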
I made a few tweaks and voila! Check out the demo for the code and result:
This works really well, and generates extremely quickly. You can play with the variables at the top of the file to change the dimensions, fidelity and roughness.
By running the output SVG through SVGO it ends up being pretty small too! This is exactly the method you see around my site at the time of writing.
I did make a third attempt, using Simplex noise to generate a terrain map with higher fidelity, cliffs, overhangs and flatter regions. I didn't get very far with it however, as I didn't particularly like the effect for the divider—it pulled away too much attention. It was also significantly slower to generate and the SVG was quite a lot larger, so I ended up ditching it and sticking with attempt 2.
It is very fun to play with terrain generation though so I'd love to play with this some more in future!
Here's the final demo of the divider, as used on my site:
I implemented this server-side with an Eleventy Shortcode, but as it's JavaScript you could easily use it on the client instead. That's what I've done in the demos throughout this post.
There are so many places where web designers and developers can learn from game design and development. Video games are full of unique, creative and interesting challenges and solutions that we could learn from. This is definitely a case where a fairly standard technique used by game developers can be put to creative use on the web.
Now go have a play and implement something like this yourself! Look at any games you play, or find out a little bit about an industry you aren't as familiar with and see if there's anything you can learn from to make more creative and cool websites!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post SVG generative mountain ridge dividers appeared first on alistairshepherd.uk.
If you want to dive straight in check out the Async Alpine GitHub repo for the docs!
This is a companion post to "Code Splitting in Alpine.js", a post that goes into more depth on why we would want to split up our code more, and the principle on how we do it in Alpine. Check out that for more depth, stay here for how to get started with Async Alpine.
There are a few different methods of installing, depending on how you load Alpine.
If you load Alpine via a CDN script, do the same with Async Alpine:
<script src="https://unpkg.com/async-alpine/dist/async-alpine.script.js"></script>
<script src="https://unpkg.com/alpinejs/dist/cdn.min.js"></script>
For npm installations, install with npm install async-alpine and include it in your bundle:
import Alpine from 'alpinejs';
import AsyncAlpine from 'async-alpine'
AsyncAlpine(Alpine)
// any components or plugins go here
Alpine.start()
Async Alpine leans into and relies on ES Modules to dynamically import components. This supports all modern browsers and keeps the package fast and lightweight.
An ES Module Alpine component looks like this:
export default function myComponent() {
return {
message: 'hello!',
init() {
alert(this.message)
}
}
}
It's common to write Alpine components like this already, so this might look familiar! The key thing is that the file uses export default ES Module syntax to export the component function.
If you ship handwritten JavaScript then you can write your component similar to above, pop it in your assets directory and you're golden!
If you process JS with a build tool or bundler you may need to do some work to output modules in the right format. This will depend on your build tool, but the majority are easy to set up:
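As an illustration, with Rollup a minimal config outputting a component as a standalone ES module might look like this (other bundlers have equivalent options):
// rollup.config.js: output the component as a standalone ES module
export default {
  input: 'src/components/my-component.js',
  output: {
    dir: 'assets',
    format: 'es'
  }
}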
You write your Alpine components as normal with Async Alpine, and add a couple of attributes to your component root:
<div
x-data="myComponent"
ax-load
ax-load-src="/assets/my-component.js"
></div>
The ax-load attribute marks the component as managed by Async Alpine and declares the loading strategy—we'll leave that as the default for now. In ax-load-src you add the public URL of your component module. Here you can use relative URLs (/assets/component.js) or full remote URLs including the domain name (https://example.com/component.js).
Now your component is loaded asynchronously when it is present on the page! No need to load it everywhere in your bundle, it'll only load if it's needed.
We left ax-load as the default previously, but we can be more specific than that! By default Async Alpine loads components 'eagerly': if the component is used on the page it will be downloaded as soon as it's found.
That's perfect for high-priority components at the top of the page, but for less important components we can use other rules to load them when they're needed. This component will load when it comes into view using the 'visible' strategy:
<div
x-data="myComponent"
ax-load="visible"
ax-load-src="/assets/my-component.js"
></div>
At the time of writing, Async Alpine has six different strategies:
- eager — load the component as soon as it's found;
- idle — waits until the browser isn't busy;
- visible — load the component when the user scrolls close to it;
- media — in response to any browser media query!
- event — use a DOM event to trigger loading at your command;
- parent — for loading nested components smarter.
These can be used as you'd like, and even combined for really advanced loading strategies!
A carousel at the bottom of a page that only displays on small screens? ax-load="visible | media (max-width: 768px)" has you sorted!
A 3D model viewer that runs on a button press but needs to ensure its parent is loaded first? ax-load="event | parent" will do the trick.
Async Alpine is still in development and the API isn't totally stable quite yet. That said, I use it successfully on production sites and would encourage you to give it a try!
There's more information, installation instructions, examples, and advanced settings in the documentation on GitHub.
If you use it or have any questions I'd love to chat with you! I am very keen to see how it can help improve the websites you build, get feedback from people who have used it and make improvements in response!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Async Alpine — Asynchronous Alpine component loading appeared first on alistairshepherd.uk.
At Series Eight we love the JavaScript framework Alpine.js. Alpine is a lightweight JS library that allows you to add interactivity and JS to static HTML with attributes, using the vanilla browser DOM. It makes it easy to add JavaScript to your existing HTML rather than replacing it like more traditional JS frameworks, whilst putting click events and text insertion right where they happen in the markup like more modern ones.
This post isn't meant as an introduction to Alpine, for that check out Alpine.js: The JavaScript Framework That’s Used Like jQuery, Written Like Vue, and Inspired by TailwindCSS on CSS Tricks.
There are two main ways of writing components in Alpine. For simple components you can write a JavaScript object as a string in the x-data attribute of your component. This is great if there's not a huge amount going on, but you can't import other libraries or transpile modern syntax, and it lacks the syntax and formatting help that writing in .js files provides. For more complex components, you can write them as a JS function and declare them with Alpine.data().
I tend to write basic components inline but use JS components for anything with more than a couple variables/methods.
Moving away from Alpine for a moment, one area that the entire JS and web industry has been grappling with is code splitting. When you use libraries or frameworks, the easiest implementation is often bundling everything into a single JavaScript file that contains the framework and all your components.
This is simple to build, but in terms of performance can be inefficient and makes little sense. Components appear in different places throughout the site and a user is unlikely to encounter them all. For example, in a typical eCommerce store you might have a static landing page. On the same site the product pages load a 3D model library to show off the products. In a bundling pattern, even though the landing page is static the user has to download all JavaScript required for the entire site—including the 3D library for the product page—even if it isn't needed. This will delay how quickly the page becomes interactive on first load, for a page and features that the user doesn't need yet—and might never need if they don't visit that part of the site.
An alternative to bundling might be to split your application up into several chunks, pages or components. Tools like Next.js will do this for you automatically to mitigate the cost of JavaScript in large sites, but the developer doesn't get much control over this.
Recent tools like Astro and Slinkity take this a step further, allowing the developer to specify when a component should load. In many cases few users will ever scroll down to see a component at the bottom of a long page. We may want to load a component like that only when a user scrolls down far enough to be likely to view it.
Custom implementations have existed for a while, but these tools are the first I've seen to bring it to modern component-based JS libraries.
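The primitives for a custom implementation are dynamic import() and IntersectionObserver. A bare-bones sketch, with an illustrative selector and file path:
// load a chunk only when its element scrolls into view
const el = document.querySelector('#heavy-component')

const observer = new IntersectionObserver(async entries => {
  if (entries[0].isIntersecting) {
    observer.disconnect()
    const { default: init } = await import('/assets/heavy-component.js')
    init(el)
  }
})

observer.observe(el)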
Back to Alpine: when people declare components with Alpine.data() they often bundle them into a single JS file. Alpine needs to register components before it runs, meaning the easiest solution is a single bundle, and we definitely don't have the fine control we get with Astro/Slinkity. This is back to our issue of loading code that might never be used.
I found that as I worked on Alpine sites the bundle got larger and larger, often with components that were rarely visited or used. One of my sites carried an extra 120kB for a fancy animation at the bottom of a landing page that was seen by 0.2% of visitors. This is the easy path with JavaScript frameworks and Alpine; loading that component only when it was needed would be tricky and might mean abandoning Alpine.
One way we can handle loading components asynchronously in Alpine is by taking control of when Alpine runs components. When it starts, Alpine scans through the entire DOM, finds elements with the x-data attribute and runs them. If we rename x-data to something else in our code, then Alpine won't see and run it.
We'd need to consider the other Alpine attributes like x-show, :class and @click, and a handful of other Alpine functionalities.
Once we've renamed those attributes and Alpine has started, we can control when we add the components again. We load the component when we'd like to based on certain conditions, use Alpine.data() to prepare it and then rename the attributes back again. Alpine will pick up on the change, see the component and run it as normal.
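A stripped-back sketch of that idea (the data-x-data and data-src attribute names here are for illustration only, not Async Alpine's actual API):
// before Alpine starts, hide async components from it by renaming x-data
document.querySelectorAll('[x-data][data-src]').forEach(el => {
  el.setAttribute('data-x-data', el.getAttribute('x-data'))
  el.removeAttribute('x-data')
})

Alpine.start()

// later, when our loading condition for an element is met
async function activate(el) {
  const module = await import(el.getAttribute('data-src'))
  const name = el.getAttribute('data-x-data')
  // register the component, then restore x-data so Alpine picks it up
  Alpine.data(name, module.default)
  el.setAttribute('x-data', name)
}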
With that we have a lightweight way to load components on-demand!
That's a minimal setup that does the basics, but we could go a lot further to add different loading strategies and to support the standard Alpine syntax. I've done that work and released it as a library called Async Alpine!
I've written another post focusing on it more—Async Alpine — Asynchronous Alpine component loading—and you can find more info about Async Alpine on GitHub.
It came out of wanting Astro/Slinkity style loading for Alpine components, and I've been working on it the past couple of months. With more control over component loading I could build faster, more efficient websites without changing the syntax of a library I was familiar with. It has advanced loading options including immediately, on idle, when visible, using a media query, DOM events or any combination of those.
It's still in development and will need more testing before I'd consider it stable, but I've used it on several production websites with great success. If you're familiar with Alpine I'd encourage you to give it a try and see how it works for you!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Code Splitting in Alpine.js appeared first on alistairshepherd.uk.
In response to Competition and Markets Authority’s Mobile browsers and cloud gaming MIR consultation
I am a Front End Web Developer resident in the UK, working for the London-based web agency Series Eight. This response represents my personal concerns and comments rather than the position of any of my clients or employers, past or present. Series Eight is a website design and development agency that builds award-winning eCommerce and marketing websites for companies and brands within the UK. As a web developer at Series Eight I work with browsers and websites extensively, and my comments come from my experience developing a large number of websites and web apps.
I have previously submitted my experiences and thoughts—specifically in relation to the monopoly of the Webkit browser engine on Apple's iOS—as a response to the Competition and Markets Authority’s Mobile ecosystems market study. My response is available as a PDF on gov.uk
In my previous response I specifically focused on the difficulties I and other developers face due to the lack of browser competition on iOS. Safari is a competent browser but in my experience has a large number of bugs and quirks that are difficult to deal with, and they cost me and the companies I have worked for a significant amount of time on every project to diagnose and fix. This cost is passed on to the companies and clients I work with, and due to a lack of any competition within the browser space on iOS, Apple has little motivation to address these issues. In fact, due to the money Apple makes from native app development, their motivation may be in making web development more challenging on iOS to stifle a competitor platform.
As raised in the CMA's Mobile ecosystems market study, Google also engages in and exploits their entrenched mobile browser monopoly on Android. By utilising their position as the dominant browser, search engine, mobile operating system and email provider, Google is able to ignore the interests of consumers in mobile browsers in favour of their own interests. This comes in the form of Google products requiring or 'suggesting' the use of Google Chrome to increase their browser market share, and of features and functionalities added to Chrome to support their business interests elsewhere, including in Search and Advertising, at the cost of consumer interests like privacy.
Despite Android supporting browser engine diversity and choice on paper, other browsers do not face the same preferential treatment as Google Chrome. Many Google apps will ignore the default browser and instead use Chrome, some APIs and functionalities are only available in Chrome, and features like Progressive Web Apps and Trusted Web Apps in some cases require Google Chrome.
I have spent hours diagnosing issues with websites to find the user reporting them was not aware that Google Search had ignored their default browser and their normal settings were not available, and development of PWAs and TWAs are more challenging when cross-browser development is not possible.
The inability of Progressive Web Apps to compete fully with native apps—particularly on iOS—also has a big impact on my work and the businesses I have worked with. Progressive Web Apps are extremely accessible for development, allowing for the creation of a simple app in a fraction of the time and complexity of a native app. This is fantastic for allowing smaller agencies and businesses to innovate on the web and on mobile devices and to reach consumers. However, through poor support for PWA features in Safari and by not allowing them in the App Store, Apple forces app development to be difficult, time consuming and extremely expensive. I have spoken with many companies who would have liked an app to compete with their larger competitors but are unable to afford the huge costs of developing a native app.
I consider the analysis of the features of concern and the reference tests in regards to mobile browsers to be correct, and it accurately reflects my experiences and concerns. Mobile browsers are a key part of participation in modern society, and it is vital that the market is fair and competitive for consumers.
I believe that opening up mobile browser competition will be an almost universal benefit. As we saw with Internet Explorer in the mid-2000s, a monopoly without reasonable competitor browsers causes progress and standards in web development to stall and negatively impacts developers and consumers significantly. By opening this up, consumers can make meaningful choices that respect their preferences towards certain features, companies and privacy. Developers will be able to rely on more competitive browser engines prioritising bugs, security and interoperability. In this manner I believe it will also benefit Safari and Chrome, motivating their teams to push for further improvement and innovation.
I broadly agree with the remedies presented by the CMA in the Mobile browsers and cloud gaming MIR consultation in regards to mobile browsers. I think that removing Apple's restrictions on browser engine diversity and mandating equal functionality for browsers is extremely important to restore competition in the mobile browser ecosystem.
In regards to the suggestion of requiring choice screens, I don't believe they would be an effective or necessary remedy. I am not sure they are particularly effective at encouraging users to look beyond the default browser, and they simply move the goalposts on which browsers are allowed to 'compete'. When Microsoft implemented a browser choice screen in Windows I saw it cause more confusion for users than help. I am of the belief that a market where all other things are equal besides pre-installation solves the issue without the need for choice screens. I believe that the prevalence of Chrome on MacOS and Windows devices shows that good browsers can easily overcome pre-installation.
An additional remedy I think is important to improving competition of mobile devices is in regards to ensuring PWAs are treated equally to native apps. I believe that Apple particularly should be required to allow PWA submission to their App Store so PWAs can compete effectively with native apps. In my experience this would significantly open up access to app development to a huge number of developers and businesses that could not afford native app development or the management of multiple platforms.
It is also extremely important that browser choice is always respected, especially when alternative browsers are available. Google, Apple, and third-parties including Meta are known to ignore default browser choice in some circumstances. A requirement for diversity is not effective when apps can ignore a user's preference, so I think it is important that platforms and apps be required to respect the default browser as a potential additional remedy.
I believe the remedies as proposed, along with those I have mentioned, would be sufficient and effective at allowing for competition in mobile browsers and at addressing the entrenched market power of Apple and Google in the mobile browser ecosystem. I have less experience with the cloud gaming industry, however as a consumer the concerns and proposed remedies do seem accurate and appropriate to ensure competition in this area.
My comments are of a personal capacity and do not represent any organisation I work for. I would like my response to be attributed to me by name, and you have my permission to publish or quote from this document with or without attribution.
Best regards,
Alistair Shepherd
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post My comments to Competition and Markets Authority on mobile browser competition appeared first on alistairshepherd.uk.
This is a wrap-up of my year, which has been extremely busy for me. It's mostly for myself to look back on, but if you're interested then great!
I never used to 'get' round-up/wrap-up posts, but over the past year I've got into journalling and taking note of what I've been doing so I can look back on it. A post like this suddenly makes sense now I realise it's primarily for me to look back on—not to show off!
The big thing for me was speaking at the State of the Browser conference in London. There I gave a talk called "Creative web: Building dynamic websites for work and play". It was my first experience speaking at a conference, and it was absolutely fantastic. The organisers, other speakers and attendees were so welcoming, friendly and helpful and I owe a massive thank you to all of them.
The recording will be available soon, but my slides are public now if you're interested! I'll also be giving an improved version again in future so watch out for that!
A particular shout-out to Bruce Lawson for his kind and touching advice on where to aim if I needed to projectile vomit. ❤️
It has been a great year for working at Series Eight, I've been involved in some awesome projects, interesting technical challenges, and some really exciting changes in the company. We've got a fantastic team that I am loving working with, and I finally got to meet them for our team trip in Portugal back in September (the disadvantage of working remotely)!
One thing I'm particularly thrilled with about our work over 2022 is how we're really taking the accessibility of our sites seriously. It's a slow process updating old sites and putting a greater priority on it in our processes, but change is coming and I'm so happy to be working somewhere that cares.
At the end of this year I was also promoted to Lead Developer of the team, starting when we return in January! It'll be a new challenge doing a bit more management and admin rather than just coding, but I'm looking forward to working to support the team.
I've done a bit more open-source and side projects this year than previous years, particularly working on my project Async Alpine. Check that out if you haven't already, it's a library for Alpine.js that supercharges component loading and I am so proud of it. Thanks to the Series Eight team and GitHub contributors for suggestions and feedback!
This year I also built Cead Consent and Sailwind. Cead consent is a GDPR/cookie/tracking consent manager that handles enabling/disabling tracking scripts and pixels for your users. Sailwind is a fluid spacing utility generator for TailwindCSS, I'm pretty excited about the potential it offers for translating designs. Both are open-source and I've got some plans for them both coming up!
Recently, with my growing distaste for Twitter, I've started helping with and contributing towards Tweetback, a tool to host your tweet archive. I've made a handful of contributions so far but I'm very keen to continue to help out and make it easier for people to own their content.
In speaking, other than at State of the Browser I've spoken about Image CDNs on quite a few occasions. I did several meet-ups earlier in the year and even a Twitch stream! A big thanks to everyone who hosted events I've spoken at over 2022.
Finally, I've written some blog posts I'm really happy with! Here's the list in reverse order:
Like many people—and the year in general—2022 was a bit of a rollercoaster for me. There's been some rough spots but this post is focusing on the positives!
Although Covid is of course still ongoing, with some care I've managed to meet up with friends I mostly hadn't seen since pre-pandemic which has been the highlight of my year. We went on a holiday to Dubrovnik, Croatia together and when this post is published I'm hosting people for a new years party.
Earlier this year I made myself a goal to climb every hill in the Pentlands over 400m, a range of hills near Edinburgh. There's 52 in total and I've managed to get through 24 this year. Those are the easy ones though, getting many of the remaining ones will end up being a bit tricky!
Despite not going abroad I've managed to get out skiing a fair bit this year! I went up to Glenshee several times at the end of last season and managed a day before Christmas this season. So excited to get up there more this year!
This is the obligatory note that I've moved from Twitter to Mastodon which has been a breath of fresh air. I'm really enjoying the 'Fedi-verse', if you haven't switched already come join us!
Also I'm dying my hair blue now, why did no one tell me it was this easy before?!
I've watched, played, listened to and read so much great stuff this year I want to share!
It's been a brilliant year for music for me! These are releases I've loved that came out in 2022:
I played a bunch of short indie games this year that were astounding. I'll save the commentary for only the top few but I'd love to speak about any with you if you'd like! Not all of them were released in 2022 but I played them all for the first time this year:
I mostly consume books in the form of audiobooks, but I'll put them in one list with physical books! The top couple are work-related, the rest my normal reading. Again, not just books that came out this year but just ones I read this year.
Only a couple new ones so here's all of the podcasts I've been listening to, new and old:
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post My 2022 round-up appeared first on alistairshepherd.uk.
I had already been considering getting a new phone for a bit however, as my trusty 9T certainly had its problems. I'll get onto some of the software issues later, but also the pop-up selfie camera only popped up about 30% of the time (thanks to a Snowboarding accident). I regularly got strange looks as my phone whirred unhealthily and I shook it upside-down to try to get the camera out.
So at this point I was looking for a new smartphone, and if, like me, you aren't a big fan of either Apple or Google, you'll be familiar with the dilemma. Android is a privacy and security hell-hole where you're expected to get a new phone almost every half an hour. iOS is possibly slightly better for privacy but expects you to sell at least 3 organs to get the cash to buy a phone. Also fuck the browser ban.
Thanks to Neil Brown, I found GrapheneOS, a mobile OS which promises to solve all my problems. I've since been using it for a couple months on a Google Pixel 6 and it's been fantastic. This post is about my experience with it to share the love!
So GrapheneOS is Android... kinda. It's basically the open-source version of Android but with loads of added security and privacy functionality. It's open-source, officially supports all the most recent Google Pixel devices and can be fairly easily installed to replace the default Pixel Android.
By default it doesn't include Google apps or services for security and privacy reasons, so Google doesn't have constant access to your device for their nefarious purposes. It does however support android apps, and you can install Google services in a limited way to ensure maximum compatibility while ensuring it doesn't have full control/access.
It sounded fantastic, Android app compatibility including for apps that need Google services, whilst a priority on security and privacy. That's exactly what I want from my phone.
So on my previous couple of phones I tried to do something like what GrapheneOS promises. I had a de-googled version of Android, with the microg project adding support for apps that needed Google services.
Unfortunately in practice it didn't work great. Props to all devs involved in making it happen, but so many apps didn't work on it. Some intentionally: "Your phone is insecure, fuck off", some just crashing.
I had to carry a second phone with me that had 'normal' Android on it for my banking apps, most takeaway/food delivery apps, on holidays I was relying on others to order an Uber for me, and mobile gaming was pretty much completely out. There were often workarounds and alternatives for some of these but it was regularly a huge effort just to install an app.
Graphene has a very fancy web-based UI for installing itself onto devices and extremely good instructions and documentation. It was probably the easiest OS install/flash I've ever dealt with, mobile or otherwise.
Compatibility with different computers seems a bit iffy, I couldn't get it working on my Windows 11 desktop (driver issues probably) and work Macbook (USB C-C cable not being right maybe?) but it worked fine on my personal Windows 10 laptop. If you have issues try different computers you have access to and different cables.
You basically just go through all the steps, doing what you're told and clicking the buttons when prompted. I did have some cases where it seemed to stop at random points so I had to re-do some steps when they didn't finish but eventually they all worked and I got it installed.
Set-up was pretty standard Android, minus all of Google's shitty questions about tracking and such. Overall very easy to get installed and set up.
GrapheneOS has full separate user profiles and encourages users to utilise these to isolate different apps from each other to increase privacy and security. On Android every app can see what other apps you have installed on that user and potentially interact with them, so if you split your apps across different users it limits how much each app knows and can potentially affect.
It also allows you to more easily control what apps are running when. If a user is not 'logged in', none of the apps in it can run in the background.
I got really confused about how to set this up at first. I understood the concept but didn't really get the details of what was being suggested. Looking at what other people did confused me further as it varied so much. Some people would have an 'Instagram' user, others would not use the default user at all; I didn't really get to what extent I should be using user accounts.
Once I played with it a bit it started to make more sense. I'd suggest thinking about them as different 'contexts'. I ended up with this user structure:
I really like GrapheneOS. Quite often software intended for security- and privacy-minded people compromises a lot on usability, but the user experience of Graphene is fantastic. It has a handful of issues—more on that in a sec—but none are major enough to outweigh all of the problems it solves with the mobile OS market.
Some of the things I love about it:
In terms of battery life and performance, it's been pretty much exactly the same as the stock Android I tried before installing Graphene. Performance seems the exact same, and the battery life might even be a bit better with less tracking and more control over background apps.
There aren't many issues, but as I said it's not perfect.
The most notable for me is that Facebook Messenger calls often don't come through to me. Even if I have the app open, someone will call me and nothing even comes up until the 'didn't answer' message appears. Messages work and I can call people fine, but until I come up with a solution I have to occasionally check my missed calls. It's not a terrible arrangement, my friends and family know to phone me if it's urgent.
I've also found a handful of apps that don't work, with Google Play Services or without. So far there's been three, all random games from the Play Store that crash on startup. None I'm that fussed about yet so I haven't done any debugging. It is a very small number compared to the total number of apps that work great.
I normally have an always-on VPN and have occasionally had issues with connecting to the internet with it on. This might be my Wireguard VPN client but I didn't have any problems on my last phone. Toggling it off and on tends to sort it out.
This post is mostly about GrapheneOS, as the software is what I really care about. If you're planning on buying a phone for Graphene though, I'll mention my experience with the Pixel 6.
I bought a refurbished device rather than new for climate reasons and to avoid giving money to Google directly. After the first device turned out to be a display model stolen from an o2 store and unusable, the second one was in perfect nick and as new.
It's a bit big, I think I maybe should have gone for the Pixel 6a as that's slightly smaller, but I manage it okay with fairly big hands. It is slippy so I'd suggest getting a skin or case for it. I've got a Spigen Liquid Air which isn't too thick and offers a bit of protection whilst still feeling pretty premium. I don't like the shape of the back with the big raised camera, but the case makes that a bit less major.
The camera quality is good, I'm not sure it's up to all the hype from the many adverts I've seen but it's better than my previous phones.
The fingerprint sensor is pretty crummy unfortunately, the worst I've used. It works, but fairly often I have to try multiple times to get in. I originally wondered if this was Graphene, but from some searching it seems to be the phone hardware.
Overall, it's a decent phone for a decent price. A good one to go for if you're buying a new phone for Graphene. I wouldn't recommend it without Graphene though, it's not worth the Google spyware.
I'm a big fan of GrapheneOS and it's pretty much nailed the perfect mobile OS for me at the moment. I've got more control over my phone, how I use it, how apps run on it, and who it reports back to than I ever have before.
I would highly recommend it for anyone considering a new phone, especially if you're considering privacy, security or control over your device.
Feel free to message me or email me if you have any questions about it! I'd be happy to help. 👋
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post GrapheneOS as my daily-driver mobile OS appeared first on alistairshepherd.uk.
I track my routes for posterity so I know how long particular walks have taken, in case I come back to them again some time. I don't really do much else with them though, and with an itch to build a new site for myself I wondered if I could do something with the route files.
Leaflet is a fantastic open source JavaScript library for displaying and interacting with Open Street Maps on the web. Think of it like the Google Maps JS API but totally free, lacking the horrendous tracking, and more customisable.
Leaflet is pretty easy to get started with, I'm going to use the CDN url:
In the HTML I'm loading the script and CSS for Leaflet and have an HTML element for the rendered map. In the CSS I'm just displaying that map fullscreen.
In the JavaScript we initialise a map on the #map element and set the starting co-ordinates and zoom level. We then create a Tilelayer, which is basically the images that make up the map, along with credit details, and add it to the map.
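Roughly, that setup looks like this (co-ordinates and zoom are placeholders for wherever you want the map to start):
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css">
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>

<div id="map"></div>

<style>
  /* display the map fullscreen */
  #map { position: fixed; inset: 0; }
</style>

<script>
  // initialise the map with starting co-ordinates and zoom level
  const map = L.map('map').setView([55.95, -3.19], 13)

  // add the OpenStreetMap tilelayer with attribution
  L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap contributors</a>'
  }).addTo(map)
</script>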
There we go! We've got a simple map.
I can export my routes from OsmAnd in the format GPX. GPX is short for "GPS Exchange", and it's an open standard that's used by lots of different GPS devices and programs for sharing data about routes. It's pretty standard and many GPS and tracking apps will be able to export to it.
I've exported a GPX file to try, and the first thing I notice is it's over 600kB! I'm pretty particular about web performance but I think it's fair to say that is far too large if I can do anything about it.
Thankfully there are various tools to help reduce that filesize. My route tracker is set up to log every 5 seconds so I guess over a 4 hour walk that is a lot of data points, but in reality I don't need nearly that many to show the rough route online. I found gpx studio to be handy for editing GPX files, it allows you to import and export, has some handy tools and importantly allows you to view the route on a map. I used the "Reduce number of tracking points" option (the two diagonal arrows) to reduce the number of tracking points from 2,300 to 390 which reduced the file size a lot. I chose that number by cranking it down until I started to lose some of the fidelity I wanted in the route line.
After reducing the points we got down to 36kB. A lot better!
The next step was getting the route onto my map. It turns out there's a plugin called leaflet-gpx that makes that really easy. I upload my GPX file somewhere, include the plugin JS in a script tag, and can then use new L.GPX to create the route.
I've also got an event listener after this to fit the map to the route when it's loaded. This means I can remove the .setView() when creating the map and not worry about the latitude, longitude or zoom level; Leaflet will handle it for me.
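Put together, the route loading looks roughly like this (with a placeholder GPX URL):
// create the route from the GPX file
const route = new L.GPX('/routes/my-walk.gpx', { async: true })

// once loaded, fit the map view to the bounds of the route
route.on('loaded', event => {
  map.fitBounds(event.target.getBounds())
})

route.addTo(map)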
When loading the plugin over a CDN I found the included icons didn't load by default, and I had to add the below code to make them work:
new L.GPX('...', {
async: true,
marker_options: {
startIconUrl: 'https://cdn.jsdelivr.net/npm/leaflet-gpx@1.7.0/pin-icon-start.png',
endIconUrl: 'https://cdn.jsdelivr.net/npm/leaflet-gpx@1.7.0/pin-icon-end.png',
shadowUrl: 'https://cdn.jsdelivr.net/npm/leaflet-gpx@1.7.0/pin-shadow.png'
}
})
This is fantastic and is almost exactly what I'm looking for! How the map looks isn't ideal though, the default Open Street Map tiles aren't very relevant for hillwalking, prioritising roads and the golf course in this example. Ideally I'd have it highlight the different terrain, walking paths, fences, and feature contour lines at the very least!
Earlier we added a Tilelayer to Leaflet, which I said was basically the images that make up the map. In that example we used the tiles provided by OpenStreetMap, the 'official' and recommended default of Leaflet. That's just one option however, and anyone can use the data from OpenStreetMap to make their own map tiles looking however they'd like.
Turns out there's a bunch of different Tilelayers out there for all sorts of different purposes, and Leaflet supports lots of them! There's a mixture of open-source, free, and licensed tile providers, with many listed in the Open Street Map Wiki. By finding a tile provider that is more focused on hiking and outdoor pursuits I could get a map that was a lot more suitable for my purpose.
I managed to find Thunderforest, which is a company that offers a range of maps, and crucially an "Outdoors" map that has contour lines, forests, walking trails and hillshading—perfect for my hillwalking maps! They run their maps as a commercial product with API keys, but have a generous free "Hobby Project" tier that is perfect for my use.
We can switch it out really easily by changing the URL used in the L.tileLayer function. Thunderforest also requires us to update the attribution in the bottom-right corner of the map, so I've done that here too.
new L.tileLayer(
'https://tile.thunderforest.com/outdoors/{z}/{x}/{y}.png?apikey=xxx',
{ attribution: 'Maps © <a href="https://www.thunderforest.com">Thunderforest</a>, Data © <a href="https://www.openstreetmap.org/copyright">OpenStreetMap contributors</a>' }
).addTo(map);
Put that all together and we have a handy little widget that allows me to view and display my walking routes on the web!
This was a fun little experiment to test the waters for a potential future project. If you're interested to see what I do with this (no prizes for guessing) then follow me on Mastodon!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Displaying Hillwalking routes on the web with GPX files and Leaflet appeared first on alistairshepherd.uk.
You can find me at accudio@mastodon.scot and if you're not on the Fediverse yet come join us!
Now onto the actual topic of this post: trying out various Mastodon and Fediverse apps for Android! I've been using Twidere since I started, originally because it supported both Twitter and Mastodon and therefore allowed me to transition slowly. I removed Twitter after a few weeks but kept the app as it's decent. Unfortunately, lately I've had it crashing upon seeing particular posts. That means I have to clear the cache and data before re-using it — not ideal!
Hence my search for a new app, but first...
I used Twitter — and use Mastodon — in a very particular way. Normally I check in maybe 2-3 times a day, depending on how busy I am. When I start my app, I want it to be exactly where I left off when I last used it. I then go through 'catching up' on what's been posted since I last checked, and once I'm done I finish and get back to work/play. When I next check it I continue from where I finished.
I have a pretty curated feed so this doesn't tend to take me very long and means I don't miss anything from the folks I really care about. I use mutes, hide retweets and separate RSS feeds to keep track of others in a less instant form.
This way of using Twitter and Mastodon is ideal for me, and is the most healthy relationship I've had with Social Media as once I'm 'caught up' there's no more scrolling to do!
I have just one 'must-have' for a Mastodon app (beyond the very basics of loading posts) and from how I use it that'll probably be pretty clear! I need the app to keep track of where I am in my feed, no exceptions. I don't mind if loading more posts is automatic or manual, but it has to happen around my scroll position so I don't lose track of where I am. Any other features are a bonus!
First, thank you so much to everyone who gave me recommendations when I asked last week; most of the apps below came from those recommendations.
If there are any good ones you know of that I haven't mentioned then please let me know!
Tusky was the most common recommendation I got and it's pretty good! Modern design, very quick, great support for Mastodon features like Polls, editing, displaying of alt text, all of which is beyond Twidere.
Unfortunately, when I'm scrolling up and tap "Load More" it pins me to the top of the new posts rather than the bottom, requiring me to scroll back down to find where I was beforehand. I tried to stick with it for a bit because I really like the other features but it just frustrated me unfortunately.
Husky is a fork of Tusky and has pretty similar features, mainly focused around Pleroma servers. I don't use a Pleroma server so most of these aren't too major for me as a Mastodon user.
Sadly it faces the same issue as Tusky with loading new posts.
I like the design and feel of the official Mastodon app, but compared to Tusky and Husky the comparative lack of customisation kinda sucks. That said, if it can load posts in the way I want it to then that's nothing...
Once again, no it can't. Tapping "Load more posts" pins you to the top (newest) of the loaded posts rather than the bottom (oldest). Another fail :(
Megalodon is a fork of the official Mastodon app that adds support for some pretty handy features like a Federated timeline, more control over posting, and image description viewing. Moshidon is a fork of Megalodon which also adds support for remote local timelines.
Both are cool but suffer from the same issue as the official app when it comes to loading posts.
I don't necessarily need an app really, I'm perfectly happy to use a website or Progressive Web App instead if that works better. That said, thanks to Google being anti-competitive, monopolistic jerks, my phone doesn't support Progressive Web Apps properly. So I'm restricted to the Mastodon website of my server as viewed through my browser, no PWA install superpowers.
And this ends up worse than the apps unfortunately! If I load it up having not used it in a few hours the page loads fresh and I'm placed at the top of my feed with the latest post. Absolutely no can do unfortunately.
I can't test how it works as a proper PWA, but giving it the benefit of the doubt I'm blaming this one on Google.
Elk is a third-party website and PWA for a few different Fediverse servers that is pretty great for customisation and accessibility.
It's another casualty of Google it seems; it loaded at the latest post, losing track of where I was. It's a great website though so I might switch my desktop use to it, but it doesn't work for my mobile use.
FediLab is the best shot yet. It seems a bit rougher than the other apps but works extremely well, with most of the customisation I'd want and loads of super handy features like showing Alt text, automatic privacy-friendly translation, and support for alternative frontends for sites like YouTube, Twitter and Instagram.
This is the first app that copes with my style of browsing the feed! It keeps track of my place in the feed and when I scroll up there's a "Fetch more messages" button that lets me choose whether I want to be placed at the newest first or the oldest — ideal!
I've been using FediLab for about a week, and it's sadly not the success story I hoped it would be. In theory it handles keeping track of scroll position fine, but in practice it's not perfect. On one in every 3 or 4 opens of the app it will misplace me by some amount up or down, making me spend time working out where I was before. Sometimes I think it just isn't saving my position during an entire session, and the next time I open it I'll be back to where I started.
I really want to stick with it because it's a really solid app and nice experience but after a week of use I don't think I can.
So that leaves me... exactly where I started? Even with the crashing and all of the Mastodon features it's missing, Twidere meets my requirement of how I use the app the best. Its tracking of scroll position is rock-solid and that part of it hasn't had any issues at all whilst I've been using it.
In terms of the crashing, it seems to happen about once every 4-5 days on average, and I haven't put the time into working out exactly what's causing it. To fix it, I need to clear the data and cache of the app and restart it. I'm still logged in at that point, but it's lost all of my customisation. However, there's a way to export your settings to a file so I can re-import that after it's crashed to fairly quickly get back to where I was.
It's a faff, and means when it does crash I lose where I was, but that happens a lot less frequently than losing my place did with any of the other apps.
I'll miss all of the features of the others. When I want to vote in a poll, for example, I have to open the post in my browser, copy the URL and paste it into my home server. A bloody pain, but I don't use polls that often. I think the thing I'll miss the most is being able to see an image's alt text within the app; without that it's hard to keep to my policy of "No alt no boost".
So there we go, that's a summary of how I switched my Mastodon app usage from Twidere to Twidere! If I've missed a setting on any of the apps that would make them work better, or you have any other suggestions then please let me know!
And once again, join me on the Fediverse at accudio@mastodon.scot, and if you aren't on the Fediverse then come join us! joinmastodon.org is a great place to get started.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Searching for a Mastodon app for Android appeared first on alistairshepherd.uk.
Turns out that Vercel makes it super easy to set up a simple GeoIP service for yourself!
If you just want the code you can find the repo at github.com/Accudio/vercel-geoip and demo at accudio-geoip.vercel.app. You can fork that repository and deploy it to your own Vercel account to use yourself!
I have also published a very similar post (almost identical to be honest, it's mostly copied) about how to do the same with Netlify.
Read on for a deeper explanation, and let me know if you have any thoughts or issues!
For a couple of projects I'm currently working on, I recently needed a Geolocation API. Nothing too major, just getting a user's very rough location based on their IP address, to tailor their default experience of language, currency, or laws.
There are a TON of Geolocation API services with various pricing, trustworthiness and privacy/tracking policies. I looked at a few, but the per-lookup pricing and lack of certainty around trusting a third-party with our users' IP addresses was a bit of a deterrent.
If you haven't heard of Vercel before, it's a hosting company that specialises in JAMStack sites, similar to Netlify. It's a good platform for static sites, JavaScript-based frameworks and serverless/edge functions.
It's the serverless and edge functions that are the key to this setup. Serverless and edge functions allow us to run a node.js script on each request, responding dynamically. Serverless functions run on centralised servers (they're pretty badly named!), Edge functions are a bit more restrictive and run directly on the CDN nodes allowing for a potentially faster or lighter response.
These functions can be combined with the geolocation information Vercel provides via HTTP headers. We can send that data back on the request in a JSON format, and then use that within our front-end JavaScript.
As most of the examples of Vercel's functions rely on Next.js, it's a bit tricky to find how to set up functions without it. For my own later reference and to avoid you having to go through the same research, I'm going through the full process!
First we need to initialise our repo, npm project and install the Vercel packages.
mkdir vercel-geoip && cd vercel-geoip
git init
npm init -y
npm i -D vercel
npm i @vercel/edge
In Vercel projects functions are placed within an api/ directory, so let's create an api/index.js file. This will run on any requests to /api/. Within it, we're going to put the very basics of an edge function with a basic text response:
// api/index.js
export const config = {
// Specify this function as an edge function rather than a serverless function
runtime: "edge"
};
// We export the function that runs on each request, which receives the `request`
// parameter with data about the current request. We'll use this later
export default function (request) {
// respond to the request with the content "hello world!"
return new Response('hello world!')
}
To test our function, we can run npx vercel dev to start the Vercel development server. This will ask you to link the project to your Vercel account and for some details about the project. You can leave those details as default. Now, if you visit the dev URL in your browser and add /api — probably localhost:5000/api — you should see "hello world!".
Now let's amend our index.js file to include the Geolocation bits:
// api/index.js
// Import the geolocation and ipAddress helpers
import { geolocation, ipAddress } from "@vercel/edge";
export const config = {
runtime: "edge",
};
export default function (request) {
// The geolocation helper pulls out the geoIP headers from the request
const geo = geolocation(request) || {};
// The IP helper does the same function for the user's IP address
const ip = ipAddress(request) || null
// Output the Geolocation data and IP address as a JSON object, and
// set the content type to make it easier to handle when requested
return new Response(
JSON.stringify({
...geo,
ip,
}),
{
headers: { "content-type": "application/json" },
}
);
}
Now this won't work in the dev server as Vercel doesn't inject the geolocation headers there, but if you open the function it at least shouldn't error. You can get a preview deployment to test it on the Vercel servers by running npx vercel.
If you visit the /api route on your preview URL you'll get the Geolocation data of your IP address! Neat!
If we try to call this on a different website with JavaScript, we're going to run into CORS issues. CORS — Cross-Origin Resource Sharing — is the mechanism browsers use to stop websites from reading content they shouldn't have access to, like resources from a local network. This means, as things currently stand, a browser won't let us access the content from our API request with fetch.
To allow us to use the API within JavaScript in a browser, we need to tell the browser to allow CORS. We can do this by adding some HTTP headers, via a vercel.json config file in the root of our project:
// vercel.json
{
"headers": [
{
"source": "(.*)",
"headers": [
{ "key": "Access-Control-Allow-Origin", "value": "*" },
{ "key": "Access-Control-Allow-Methods", "value": "GET,OPTIONS" }
]
}
]
}
This is taken from Vercel's "How can I enable CORS on Vercel?" guide. Since this is a relatively straightforward API we don't really need a lot of the parameters in that article, so I've simplified it to allowing all origins, and only the GET and OPTIONS methods.
There is one thing to note with the above code however: the Access-Control-Allow-Origin header allows all origins to make a request to the API. In most cases that might be okay, but you may want to prevent other sites from using your API, especially if you start hitting Vercel's usage limits.
You can whitelist a single origin by adding it to the Access-Control-Allow-Origin header instead of *. For multiple origins you could also include the CORS headers within the edge function itself, chosen depending on the requesting Origin. I haven't run into that problem yet though, so consider that a further exercise for the reader!
The final touch is a rewrite so we can hit our API at the root URL /, instead of having to include api/ on every request. With Vercel we can do that with a few more lines in vercel.json (I've collapsed the "headers" array from before to keep the example short; keep your CORS headers in it):
// vercel.json
{
"headers": [],
"rewrites": [
{ "source": "/", "destination": "/api/" }
]
}
We can deploy the API to Vercel with npx vercel --prod, or link the project via the Vercel website to a Git repo on GitHub, GitLab or similar. Access the API at the Vercel URL, for example accudio-geoip.vercel.app, and there we go!
This is the result I get when visiting that URL (IP obfuscated for privacy):
{
"city":"Loughborough",
"country":"GB",
"countryRegion":"ENG",
"region":"lhr1",
"latitude":"52.7681",
"longitude":"-1.2026",
"ip":"XX.XX.XX.X"
}
It's definitely not perfect: for a start, I'm in Edinburgh, Scotland, not Loughborough, England! City and Country Region should maybe be taken with a pinch of salt, but that's something I run into with GeoIP systems all over the web so it's clearly not just Vercel. (interestingly, my Netlify post had similar but slightly different results)
For the purposes of country though it's accurate, and the City and Region may be helpful to set a default that a user can later change.
We can use this within JavaScript on another website like so, but keep in mind you may need to switch from using await to .then() depending on your setup.
const geoRequest = await fetch('https://accudio-geoip.vercel.app')
const geo = await geoRequest.json()
console.log(geo.country)
// GB
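If your setup can't use top-level await, the .then() version might look something like this:
fetch('https://accudio-geoip.vercel.app')
  .then((response) => response.json())
  .then((geo) => console.log(geo.country))
  // GB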
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Simple, cheap GeoIP API using Vercel Edge functions appeared first on alistairshepherd.uk.
Turns out that Netlify makes it super easy to set up a simple GeoIP service for yourself!
If you just want the code you can find the repo at github.com/Accudio/netlify-geoip and demo at accudio-geoip.netlify.app. You can fork that repository and deploy it to your own Netlify account to use yourself!
I have also published a very similar post (almost identical to be honest, it's mostly copied) about how to do the same with Vercel.
Read on for a deeper explanation, and let me know if you have any thoughts or issues!
For a couple of projects I'm currently working on, I recently needed a Geolocation API. Nothing too major, just getting a user's very rough location based on their IP address, to tailor their default experience of language, currency, or laws.
There are a TON of Geolocation API services with various pricing, trustworthiness and privacy/tracking policies. I looked at a few, but the per-lookup pricing and lack of certainty around trusting a third-party with our users' IP addresses was a bit of a deterrent.
If you haven't heard of Netlify before, it's a hosting company that specialises in JAMStack sites. I use it for this website and a lot of my personal projects, and it's a great platform for static sites, JavaScript-based frameworks and serverless/edge functions.
It's the serverless and edge functions that are the key to this setup. Serverless and edge functions allow us to run a node.js script on each request, responding dynamically. Serverless functions run on centralised servers (they're pretty badly named!), Edge functions are a bit more restrictive and run directly on the CDN nodes allowing for a potentially faster or lighter response.
These functions can be combined with Netlify's context object for geolocation information. We can send that data back on the request in a JSON format, and then use that within our front-end JavaScript.
For my own later reference and potentially yours, I'm going through the full process of setting up a simple Edge function on Netlify!
First we need to initialise our repo, npm project and install the Netlify CLI for local development.
mkdir netlify-geoip && cd netlify-geoip
git init
npm init -y
npm install netlify-cli -g
In Netlify projects edge functions are placed within the netlify/edge-functions/ directory by default, so let's create a netlify/edge-functions/geoip.js file. Within it, we're going to put the very basics of an edge function with a text response, and specify that Netlify should serve it at the root path /:
// netlify/edge-functions/geoip.js
// Specify that this function should run on the path `/`
export const config = { path: '/' }
// We export the function that runs on each request
export default () => {
// Respond to the request with the content "hello world!"
return new Response('hello world!')
}
To test our function, we can run netlify dev to start the Netlify development server. Now, if you visit the dev URL in your browser — probably localhost:8888 — you should see "hello world!".
Now let's amend our geoip.js file to include the Geolocation bits:
// netlify/edge-functions/geoip.js
export const config = { path: '/' }
export default async (request, context) => {
// The context parameters includes details about the current request,
// including the geolocation information and client IP address
return Response.json({
...context.geo,
ip: context.ip
})
}
Once again we can test this with netlify dev; you may need to restart the development server to pick up the latest changes. If you visit the preview URL you'll get the Geolocation data and your IP address in a JSON format! Neat!
If we try to call this on a different website with JavaScript, we're going to run into CORS issues. CORS — Cross-Origin Resource Sharing — is the mechanism browsers use to stop websites from reading content they shouldn't have access to, like resources from a local network. This means, as things currently stand, a browser won't let us access the content from our API request with fetch.
To allow us to use the API within JavaScript in a browser, we need to tell the browser to allow CORS. We can do this by adding some HTTP headers via the second argument of Response.json:
export const config = { path: '/' }
export default async (request, context) => {
return Response.json(
{
...context.geo,
ip: context.ip
},
// Add a second parameter to `Response.json`
// where we can provide our CORS headers
{
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET,OPTIONS'
}
}
);
};
You could be more specific with your CORS headers, but for a simple API like ours this will do fine. These two lines allow all origins to access the API, and only the GET and OPTIONS methods.
There is one thing to note however: the Access-Control-Allow-Origin header allows all origins to make a request to the API. In most cases that might be okay, but you may want to prevent other sites from using your API, especially if you start hitting Netlify's usage limits.
You can whitelist a single origin by adding it to the Access-Control-Allow-Origin header instead of *. For multiple origins you could also dynamically read the Origin header and use that to allow or disallow a request. I haven't run into that problem yet though, so consider that a further exercise for the reader!
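As a rough sketch of what that could look like (my own illustration, with a hypothetical list of allowed origins):
// netlify/edge-functions/geoip.js
export const config = { path: '/' }
// Hypothetical list of sites allowed to call the API
const ALLOWED_ORIGINS = ['https://example.com', 'https://example.org']
export default async (request, context) => {
  const origin = request.headers.get('origin') || ''
  const headers = { 'Access-Control-Allow-Methods': 'GET,OPTIONS' }
  if (ALLOWED_ORIGINS.includes(origin)) {
    // Echo back the matching origin rather than allowing everything with *
    headers['Access-Control-Allow-Origin'] = origin
    // Tell caches that the response differs depending on the Origin header
    headers['Vary'] = 'Origin'
  }
  return Response.json({ ...context.geo, ip: context.ip }, { headers })
}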
We can deploy the API to Netlify with netlify deploy --build --prod, or link the project via the Netlify website to a Git repo on GitHub, GitLab or similar. Now access the API at your Netlify URL, for example accudio-geoip.netlify.app, and there we go!
This is the result I get when visiting that URL (IP obfuscated for privacy):
{
"city": "Newbury",
"country": { "code": "GB", "name": "United Kingdom" },
"subdivision": { "code": "ENG", "name": "England" },
"timezone": "Europe/London",
"latitude": 51.3195,
"longitude": -1.4146,
"ip": "XX.XX.XX.X"
}
It's definitely not perfect: for a start, I'm in Edinburgh, Scotland, not Newbury, England! City and Subdivision should maybe be taken with a pinch of salt, but that's something I run into with GeoIP systems all over the web so it's clearly not just Netlify. (interestingly, my Vercel post had similar but slightly different results)
For the purposes of country though it's accurate, and the City and Subdivision may be helpful to set a default that a user can later change.
We can use this within JavaScript on another website like so, but keep in mind you may need to switch from using await to .then() depending on your setup.
const geoRequest = await fetch('https://accudio-geoip.netlify.app')
const geo = await geoRequest.json()
console.log(geo.country.code)
// GB
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Simple, cheap GeoIP API using Netlify Edge functions appeared first on alistairshepherd.uk.
I used to be thrilled over everything new, but in the past decade or so it's just faster, smaller, higher pixel density. VR got kinda close but still feels a bit like strapping a phone to my face.
That's why I was so surprised when a new pair of earphones completely blew my fucking mind. Those earphones are the Aeropex by Aftershokz and they're a pair of fantastic bone conduction earphones. I don't do reviews but my amazing experience with these meant I had to share it.
Previously when on video calls at work I used a pair of open-backed headphones or a pair of fairly standard earbuds. Now I'm quite a loud person normally, and it turns out that when I talk with headphones/earbuds in I get even louder, almost shouting. Enough that when I started having more calls my partner complained and a neighbour mentioned it. 😬
I'd heard of bone conduction earphones before and they seemed like a decent option. If you're not familiar with them, they have a couple of pads that sit against your temples and get sound into your brain by 'conducting' it through the bone. It travels through the bone to your inner ear and you hear it as if it were normal sound coming through the air. This leaves your ears and ear canal clear for environmental noise and, importantly for me, for your own voice.
TLDR: the Aeropex are amazing, blew my mind when I first started using them and would recommend them to everyone. They're comfortable, sound great, are easy to use and hold a decent battery. I've used them for about 4 months now, they've replaced all of the other earphones/headphones I had previously, and I don't think I'll ever go back to other headphones/earphones.
I bought them on discount for £90 from Amazon, the Aeropex no longer seem to be on Amazon but they're available for similar prices elsewhere and the OpenRun on Amazon looks pretty similar.
It came in very professional and nice packaging, although a bit on the over-engineered side. Standard sort of thing for tech products. The box included the earphones themselves — pre-charged to a pretty full battery — a silicone carrying case I'll never use, disposable foam earbuds (I guess for if you actually do want to block out noise?) and two charging cables.
The earphones themselves are pretty small and light, so they don't have a USB or similar port; instead they have a custom magnetic two-contact charging connection. The charging cable is USB-A on one end and just snaps into place when close to the earphones. Having a second cable is a nice touch: I wouldn't want to lose one and not be able to easily get a replacement.
They pair via Bluetooth, a pretty standard 'hold the button til you hear the beep' sort of setup. They seem to be able to pair with any number of devices — so far I'm at 7 — and to switch you just disconnect from one device and reconnect to another. They'll automatically connect to the last device used, which can sometimes be annoying — my sleeping laptop on the other side of the house will connect to them instead of my phone right next to me. The connection is really solid and has never really had issues except when the device and earphones have been separated by a good 10m or a few walls.
Aftershokz claim the battery life is 8 hours but I found it was consistently a lot better than that. For me they last about 10-15 hours of near-constant use, and generally do me 2-3 days before needing a recharge. They recharge very quickly, needing only 30 minutes or so from dead to full.
On battery, there are audio warnings about the battery level when you turn them on/off and as you approach empty. They're not super useful though really: I find "High" lasts a couple of hours, "Medium" lasts ~5 hours and "Low" lasts 5+ hours. So I hear "Low" and have no idea if that means 'need to charge tonight' low or 'about to shut down' low. Android, Windows and Mac all report the percentage in increments of 10% pretty accurately though, so check that instead.
They go easily over your ears and behind the back of your head, holding themselves pretty well but not tightly. They're very flexible so will likely fit most heads okay. They're super easy for me with fairly short hair, but as a friend discovered they're a bit trickier for longer and curlier hair — you may need to put them between your hair and back of your neck.
They're probably the most comfortable personal audio device I've had, as they're very light and just rest on your ears with no pressure. They do compete for space above my ears with my glasses but not enough to be uncomfortable, I just need to make sure they're on the outside otherwise my glasses fall off. I fairly regularly leave them on even if they're turned off as I either forget I'm wearing them or it's more convenient than carrying them.
Buttons are fine, a simple play/pause on the left side makes it super easy to pause if needed and volume+power controls are tucked behind the right ear. The buttons have multiple functions depending on how you press them but all I really use are power on/off, volume, and play/pause.
How good do they sound? Great. A lot of criticism of bone conduction earphones is that they typically don't sound amazing. Bone through skin is harder to vibrate than air I guess!
But these really do sound good. I'm quite into music and have some decent audio kit, talking a few hundred quid of DAC, AMP and speakers at both my desk and living room, some studio-quality headphones and ~£200 wireless earbuds. Even then I can only really critique the quality of the Aeropex when directly comparing them. They're not going to win awards for best sound, but in isolation I find it quite difficult to judge. I've heard many 'traditional' earbuds and headphones from audio companies that sound worse despite costing more.
Getting into more detail, they sound even and well-balanced but lacking some bass. The high frequencies aren't sharp which I love as I'm quite sensitive to high frequencies. Lower frequencies apparently don't conduct as well from the bone to the eardrum which is why they don't come through as well. It's not bad or very noticeable a lot of the time, but for some bass-heavy songs they do lack a bit of oomph.
The way my brain placed sound coming through bone did take some getting used to. Rather than coming from my ears/externally it sounds a bit like the music is actually coming from inside of my head, somewhere between my eyes. It didn't take long for that to be normal however. I suspect this actually helps with my auditory processing issues as it means external noise very obviously comes from a different direction!
The volume controls are just right, the top end is loud but not extremely so. You wouldn't want full volume all the time but it's okay for a short period in a loud environment. They can also go very quiet, if you're sensitive to loud noise or in an almost-silent environment. There are 15 different volume levels so a good amount of control; I tend to keep mine around 7-8 depending on the environment.
The way they sit next to your ears means the pads are sitting right on top of my sideburns. Amusingly that means I do notice a difference in audio quality depending on how unruly my sideburns are! If they have worse contact the quality isn't quite as good so keep that in mind if you have crazy sideburns. My mother would be thrilled to hear I have to keep them better trimmed.
As I mentioned earlier, I use these basically everywhere now. Since I've had people ask 'how do they manage in <circumstance>', here's a summary for you:
Great. I can manage my voice volume and hear my doorbell fine, better than with open-backed headphones and without needing speakers. No more complaints from my partner about shouting when on a call!
I love them whilst cooking in particular: I can hear my environment perfectly yet still listen to music/podcasts. I have to crank them up when the extraction fan is on.
One area where they do not do well. I bust out my old earphones for music whilst I hoover or use loud equipment.
Generally good in all but the busiest and loudest circumstances. Music works fine but sometimes I need to rewind an audiobook/podcast when next to a very busy road or on a Saturday afternoon in Edinburgh centre.
Ideal! As it's generally a pretty quiet environment they do well, and as my ears are still free I can still hear people, vehicles and animals with no problems. I used to have only one earphone in but now I get stereo music and stereo environment!
They sit securely (even when I shake my head) and are IP67 water-resistant so you don't need to worry about rain or sweat. I don't run but lots of reviews say they're fantastic for running.
Similarly to hill walking, it's very handy being more aware of your environment and people around you. When using them skiing I really appreciate not having to rip out an earbud when about to hop on a lift!
For activities with a helmet the main thing to be concerned about will be the way they go around the back of your head. For me it sits pretty low, hovering around the back of my neck and works fine with my ski helmet, but something to keep in mind if you wear a bigger helmet.
The microphone definitely isn't anything to call home about 😉. I tend to use the mic of the connected device as it tends to be a bit better, but the in-built one does work if needed.
One issue I've had is when used with Windows or MacOS the headphones go into 'headset mode' when call software uses them for audio output and input. This reduces the output quality significantly and stops the mic from working completely in Google Meet. The easiest solution I've found for this is to use a different mic. In Windows you can set something else as default; on Mac you have to create an 'Aggregate Device' using your other mic, as otherwise the headset is set as the default every time it connects.
Honestly I'd recommend these to everyone but the most particular of audiophiles. Many people say these are particularly for running or whatever, but I think they're perfect for so much more. On some days I am wearing them for about 75% of my waking hours and they continue to be comfortable, great to listen to, and keep me aware of my environment.
If I were scoring them I would give them a 9/10, just missing a 10 thanks to the vague battery level warnings and the issues with the microphone on calls until I worked that out. Thank you for reading the first and possibly the last of my tech review series!
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Bone conduction earphones revolution — Aftershokz Aeropex review appeared first on alistairshepherd.uk.
Unfortunately, it doesn't have any alternative text on any of the images so is inaccessible. To make this accessible to more people and with Tom Humberstone's blessing, I've written out the alternative text here. I've written it as one list item per panel, please see the original comic for the images.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Alternative text for "I'm a Luddite (and So Can You!)" appeared first on alistairshepherd.uk.
It's Interop 24 planning time! Let's play a game: Tell me what you prioritize, or don't and why?
The Interop Project is a collaboration across the browser communities focused on making various web APIs interoperable, standard, and bug-free across the Blink, WebKit and Gecko engines. It's been very productive over the past few years, fixing interop bugs and releasing new features like container queries and subgrid in a co-ordinated way.
Each year, the Interop Project accepts proposals for what should be included in the project, and once whittled down carefully considers how to prioritise working on these various important or exciting proposals.
So Brian has come up with a game! Ask developers to look through the full list, sort some of them and maybe explain a little bit of why they made the decisions they have. I found this a really interesting exercise and a great way of getting up-to-date with some of the things coming to the web platform. Please consider looking at the proposals, voting for what you like the look of and maybe doing your own prioritisation like this!
My order comes from my personal priorities, which are in Accessibility, Performance, and making the kind of websites I work on easier/more fun to build. Those are mostly public, lightweight websites with a bit of creativity. I don't know all the proposals super in-depth but I think enough to make a quick judgement of "oh yeah I want that".
There are over 90 proposals in total and pretty much all of them seem like they would be useful — it makes sorting them tricky! The "game" is to prioritise them to your own order, and if you have your own opinions make your own list!
All of the accessibility-related improvements and fixes hit the top of my list. I work in accessibility, so that was always going to happen, and I personally feel like stuff that is "nice to have" for developers should come after reducing barriers and making the web more accessible. This one is both nice to have and an accessibility improvement, so it tops the list.
display: contents is a very useful CSS property that basically removes an element's own box and allows its child nodes to participate in the layout one level up. That's particularly handy for wrapping an element in a semantic container — like li — but allowing its children to sit within the grid or flex layout of the parent.
Unfortunately, it's hampered by buggy and unreliable accessibility. In several browsers using display: contents will remove an element from the accessibility tree, so it is currently best avoided. It's my first priority as it's a great feature that is currently unusable, and an accessibility issue.
Another accessibility one, and this is related to setting display properties on particular elements. In some circumstances changing the display property of certain elements can remove or break accessibility for those elements. These circumstances are all over the web, and it would be good to ensure people relying on accessibility technology aren't impacted negatively.
The final accessibility proposal in Brian's list, this is standardising and making accessibility roles and names more consistent. It'll make it easier to build complex interfaces in an accessible way, and improve the experience for people using accessibility technology. Sounds great.
View Transitions are absolutely amazing, and I think they're going to be a monumental addition to the web. In short, page transitions but also so much more! Particularly for building flashy, creative websites it makes it easier to produce something that feels great without needing to ship a huge amount of JS to do so.
Now this proposal is for Level 1, so only the 'SPA API' that does same-document transitions. Cross-document transitions are in the Level 2 module which is still in draft. That said, there's a lot of amazing things you can do with the SPA API and by getting Level 1 out it'll encourage work on Level 2 and I want that as soon as possible!
JPEG XL looks to be a fantastic upcoming image format with great high-fidelity compression, fast encoding/decoding, backwards compatibility with JPEG, and lots of great image format features that make it look like a great candidate for the canonical image format for most use cases.
From the perspective of web performance I'm really excited for the potential of JPEG XL.
I've been playing with Scroll Driven animations recently and found them a great API that makes it super easy to implement really cool animations linked to the scroll, with performance that's almost impossible to achieve otherwise and only a handful of lines of CSS.
I've worked on projects with complex scroll animations that are basically impossible to maintain. The mess of timelines in JS is a nightmare, and the CSS API and use of native @keyframes animations is fantastic in comparison. From a performance perspective it also removes the need to reach for a 100+kB JS library for a simple scroll effect.
Now this one is surprisingly high! Text fragments make it possible to link to certain text from a page and highlight it, using a format like #:~:text=Now%20this%20one%20is%20surprisingly%20high!. Fairly regularly I want to link someone to a specific part of a page/article, but there are no in-built ID anchor links nearby. With this I can manually construct a URL that links straight to where I want it.
I already use it to link people who I know are using Chromium-based browsers, but would love for it to be possible in all browsers.
I feel like issues and bugs with multi-column layout have been a constant through my career in web since Chrome added support in 2016. Every other time I try and use them I give up and instead use a less ideal Grid, Flex or JS-based layout. Or tell the designer "Sorry, the web can't do columns properly" which is completely ridiculous.
It's about time CSS multi-column is sorted so it's reliable enough to use consistently.
Currently, division in the CSS calc() function can only be done by unitless numbers. If we were able to divide by a value with a unit it would open the way to stripping units and comparing the scale of values with different units.
This isn't one I run into often (hence its position at 9) but there's been a handful of times it's come up as something that would make CSS SO much easier. There is a cool but nasty hack using tan(atan2()) but otherwise the workarounds are annoying and either involve duplication or JS.
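For the curious, that hack leans on atan2() accepting two values with the same unit, with tan() handing back their unitless ratio. A sketch of my own:
.el {
  --width: 400px;
  /* atan2(400px, 1px) is the angle whose tangent is 400,
     so tan() of it returns the plain number 400 */
  --width-number: calc(tan(atan2(var(--width), 1px)));
}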
Text box trim allows trimming the space around text, so you can rely on padding and margins to sit flush with the text glyphs. In some designs you want a really neat alignment between a heading and graphic, currently that's tricky without resorting to fiddling with line-heights or "magic numbers".
In a design-led agency, this is definitely something our designers are looking forward to and would make heading design more flexible and easier.
If you need to change multiple properties in CSS at once based on something manual, your best bet is adding or removing CSS classes. That requires server-side or JS logic, and can get really messy. Style container queries allow modifying CSS properties depending on the value of a single custom property. Basically a custom if statement within CSS.
This is an awesome feature and super handy. Despite that, it's lower down this list for me as it doesn't really solve problems I have with the utility-first CSS methodology I use at work.
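A minimal sketch of the idea, with names of my own and syntax as per the current drafts:
/* Descendants restyle when the container's custom property matches */
.card {
  --variant: featured;
}
@container style(--variant: featured) {
  .card > .card-title {
    background: gold;
  }
}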
This is another one that is super exciting for certain methodologies and ways of writing CSS, but it doesn't really match up with how I write mine. At work I write utility-first CSS that doesn't really benefit from nesting, and on my own projects I lean hard into BEM-style nesting which needs to be pre-processed. Ah, how I wish they'd added BEM-style nesting natively, but no cigar.
It's extremely powerful, a great addition to the language, and I know some people are super excited for this. Maybe I will be if I have another look at how I structure CSS, but that's why it's not top of the list.
This is particularly around inconsistencies with how clipping text works with background-clip. Clipping backgrounds to text is a super neat feature that can produce some really cool looking effects, particularly combined with images. It can be pretty finicky though so it would be great if it were more consistent.
A way of preventing widows and generally improving readability across lines in paragraphs. Sounds good, and people who care about typography will love it. Definitely handy, I'll use it when it's available, but it's a relatively minor issue to me.
Finally we have text-wrap: balance, which provides a better layout for short blocks of text by considering where to break lines. Just adding it to major headings can easily make them look a little nicer. Like text-wrap: pretty, it'll be handy and I'll use it but I'm not clamouring for it!
These are all on my radar and of interest, but less of a priority. That may be because I don't know enough, I don't use them, am unsure quite where to place them, or they have less convenient but full-featured alternatives:
The point of the "game" is prioritisation, and I thought it would be interesting to look at what proposals are popular that I don't prioritise. Clearly lots of people want them so it's definitely not a case of them not bing valuable, just not relevant to me for whatever reason!
optgroup? See Adrian Roselli's Splitting within Selects
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Interop Priority Game 2024 appeared first on alistairshepherd.uk.
Available in full on the HTMHell Advent Calendar
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post HTMHell Advent Calendar — Getting started with Web Performance appeared first on alistairshepherd.uk.
Available in full on the Web Performance Calendar
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post Web Performance Calendar — Ten optimisation tips for an initial web performance audit appeared first on alistairshepherd.uk.
I started a new job at the beginning of 2023, taking on the position of Lead Developer at Series Eight. It's been a really interesting and challenging year, going from being mostly focused on code to instead people and processes. Although I haven't yet achieved everything I set out to, I'm happy with the work I've done and the impact I've had. There's still a lot for me to work on, particularly my stress levels! This year I'm making it my goal to be more relaxed and rely on the team more.
Some of the things I'm proud of at work:
I spoke at one conference this year and attended a couple more. I intended to speak at and attend more this year, but focusing on my new job and life got in the way of that.
If you're involved in a conference or meetup for 2024 then please get in touch! I'm updating my current conference talk "Creative Web: Building dynamic websites for work and play" with some exciting new CSS functionality, and also working on a web performance talk called "Making websites fly with web performance for all" for this and next year.
Async Alpine has done well this year and really taken off into something of its own! There have been a couple of small releases, including contributions from others, inclusion as a dependency for bigger projects, and almost 20,000 downloads from npm over 2023.
I experimented a bit with "AI" earlier this year and made Wacky Horoscopes, a site that gives silly daily horoscopes that were generated by an LLM. I'm not entirely comfortable with a project using an LLM to be honest, as is extremely obvious from my writing on the "About" page! I really like the concept, find the result funny, and am extremely happy with the frontend and 11ty build, but in hindsight my current stance on LLMs makes me regret it. That said, the horoscopes have been generated now and there's no further LLM involved so it would be a shame to take it down I think? I'm still not sure.
Near the end of the year I wrote Ridge Map, a node library to generate cool visualisations of elevation data in SVG. I thought it would take a couple hours and it ended up taking me 4 days — it was so much harder than I expected! Check out my "Arts and Crafts" section below for more photos of the results.
2023 was a huge year for writing for me, with two articles published somewhere that wasn't my blog (or my work's), and went through an editor who really knew their stuff!
Getting started with Web Performance 🚀 was published by Manuel Matuzovic in the HTMHell Advent Calendar and I am thrilled with it. My goal was something that any web developer of any level can use to familiarise themselves with the motive, concepts, jargon, and tools of web performance, and also include some suggestions of things to check first to get started. I feel like I managed to deliver that pretty well, the reception was great, and I'm also working on turning it into a conference talk. A huge thank you to Manuel and the other reviewers for helping me out with it.
I also wrote Ten optimisation tips for an initial web performance audit for the Web Performance Calendar which slightly blew my mind to be honest. I've followed the calendar for a while and I still can't really believe I am of the standard to have an article published there! This and the HTMHell articles are based on the same initial plan but focused on different audiences. If you want to go in-depth into those suggestions this article has more research, details and references.
I've also written quite a bit this year on my own blog:
And a handful of posts on the Series Eight blog:
I had a few holidays this year, mostly pretty domestic. I went on a group holiday with university friends to the Peak District, England in June, and to Bridlington, England in December (which, it turns out, has a wonderful zoo with ducks in it).
My dad moved from the Isle of Skye down to south Wales so I went for a final easy visit to Skye and visited his new house in Wales. I've only been to north Wales a couple of times so I'm keen to explore it a bit more! When we visited there was a heatwave, so the temperature, lovely golden beaches, surfing and non-English street signs made it feel like the Mediterranean!
In October I went to Prague with work for a short get-together and socialise. It's a beautiful city and it was fantastic to meet up with the team in person again, even if I don't yet forgive my boss Mario for the amount of Absinthe I drank on the final night.
I'll remember this year for the fact that after about 5 years my design burnout has faded enough for me to finally enjoy designing again! In the summer I started to get into sticker making, and ended up buying myself a Silhouette Cameo 4 cutting machine to more easily produce stickers. I've designed and printed a handful of stickers related to Scotland, development and silly stickers for friends.
It's been good fun to experiment with and make things!
What has really inspired me however is using the cutting machine instead as a pen plotter. You buy an adaptor and some nice pens, and you can have the Cameo draw out the SVG you've designed onto paper. This is right up my alley because it makes art more accessible to me — I can design and draw digital art using code or a graphics program and have the machine put it right onto paper at really nice quality!
Contour prints were where I got started with this but I then wrote Ridge Map in order to generate 3D elevation visualisations to plot. They were my go-to Christmas gift this year, and I even did some of Mars's landmarks with watercolour highlights.
I'm keen to keep making things! I'm not sure how 'professionally' I'll do it, on one hand it could be nice to sell art to people who like it online or at local design stores, on the other hand I don't want yet another full-on job! If you do want any of my stickers, prints or want a commission however let me know and I'd love to do so for internet pals!
I didn't play, watch, read or listen to as much this year, instead focusing on blog posts, side projects and arts & crafts. Because of that I'll do short paragraphs about the media I don't have many things to note for, but proper sections for Games and Music.
Getting movies and TV out the way, the only release worth mentioning for me is Barbie — obviously I adored it! I did also go see Rocky Horror Picture Show for a late screening on the day of Edinburgh Pride with free wine and I tell you what that was amazing.
This year I saw more at the Edinburgh fringe than normal, my favourite show was The Tragedy That Befalls the Dastardly Crew of the Kakapo, a perfect example of a Fringe farce with no budget that was side-splittingly hilarious. On live theatre, I also saw Sunshine on Leith at Pitlochry Theatre in November which was fantastic and made me feel extremely patriotic of Scotland, Edinburgh and Leith.
I don't read a lot, but if you're interested in my reading then check out my Literal account (similar to Goodreads). Highlights are The Theory of Everything Else by Dan Schreiber, and Sistersong by Lucy Holland.
On podcasts there was nothing new for me, more Rest is History, No Such Thing as a Fish, The F-Word and The Vanilla JS Podcast.
I didn't play much but there were a few games worth noting! This list is specifically the games I started in 2023, you can instead check out my favourite games ever.
Due to the increased number of calls I'm in at work now, the amount of music I listen to has decreased a fair bit but it's still a lot. Working from home I guess! I listened to a lot of artists and albums from the past few years rather than 2023 releases, so these are artists and albums that I particularly enjoyed in 2023.
If you have any comments or feedback on this article, let me know! I'd love to hear your thoughts, go ahead and send me an email at alistair@accudio.com or contact me on Mastodon.
The post My 2023 round-up appeared first on alistairshepherd.uk.
]]>