salil’s blog and writing space

I lost my partner of 8 years.

I have a belief that some objects have a soul, a more substantial soul than most people do. You can almost feel their will to function and perform well beyond their design capabilities. Every revolution of the engine, or of the computer fan, seems to indicate an earnest effort to do their best and work harder. When the going gets tough, they never appear to give up, but simply carry on, no matter the conditions. Sometimes they break and you get them patched up, and even though they don’t work like they used to, they still give it everything they have. They’re not just machines or tools; they take on the role of a friend, or a partner. Always by your side, no matter what lies ahead. Their dependability is so constant it almost becomes boring. The end of civilisation seems a more likely possibility than their failure.

But then one day, they start breaking down in a manner which is perplexingly undiagnosable and untreatable. They still function, but in a disabled, limp-home mode. It’s painful, and they can’t seem to do anything like they used to before, but they carry on, relying on a combination of button presses to revive them every time. It feels like they’re dead, and this panic-induced limp-home mode is them in a coma. I guess it feels like they’re brain dead, with normal bodily functions carrying on, but with no will or direction.

The initial shock was that of betrayal, not by the machine itself, but by the reality that I lived in. It’s a bit cruel that a device I used almost every single day was, for all practical purposes, dead. I had assumed it would live on forever. A machine that was 8 years old, nearly four generations past, still worked just as well as the day I first started using it. The work this machine has done is probably more illustrious than that of a fair few people I know.

Farewell for now, A1278.

Video Game Histories: Burned Sky

I’ve been wanting to write about video games for a while now, so perhaps there’s no better place to start than the very beginning of my video game history. I didn’t get to play very many video games when I was young, but there were a few standout games that I still remember to this day, vividly even. One of these games is called Burned Sky. As you may have caught on by now, my online moniker is in part inspired by this game, and in part due to a complete lack of creativity.

Burned Sky is a very special game for me. A very good friend and I used to play it for hours on end in his bedroom, driven on by the sugar rush of Coca-Cola and Limca. It was a simple game, and perhaps it was the simplicity that we enjoyed. I distinctly remember it being played off a CD with a distinctive yellow cover. The game revolves around a rebel force; you play one of its pilots, in possession of a very powerful fighter plane. Your goal is to use your plane to defeat the incumbent powers by destroying their various watchtowers, missile launchers, and fighter planes. The controls were just as primitive as the story: Up-Down-Left-Right to control the fighter, and Control-Z-X-C to fire the various weapons.

Just as much as I had enjoyed playing it, I had almost immediately forgotten it, and I carried on with my life. In 2017 I started to enter the world of immersive media and game development, and right then, the thought of remaking Burned Sky struck me. It stayed latent for a while, till a few weeks ago, when I created a rubbish prototype of the game using GDevelop. Right after making it, I grew even more curious about Burned Sky. I kept thinking about who the developers were, where they were from, and why they made the game in the first place.

I started investigating, trying to figure out the answers to the lingering questions in my mind. This wasn’t as easy as you might think. The name ‘Burned Sky’ doesn’t turn up many relevant search results, but soon I stumbled upon a page which promised the game as a giveaway.

vgh-bs1.png

Looking at this website, my best guess is that this giveaway dates to sometime in 2007, although there’s a good chance it could’ve been posted even before that. I used the paragraph as a search term and stumbled upon multiple blogs and websites which had simply relisted the same content, so there was nothing new.

vgh-bs2.png vgh-bs3.png

The game is listed on alivegames.com, so I used the Wayback Machine to look through the history of the page over the years. This was interesting: the Wayback Machine showed the page had remained practically unchanged since October 2002. The wording may have changed here and there, but the content was the same.

vgh-bs4.png vgh-bs5.png

So no new clues there. I wrote to Alivegames through their Facebook page, but received no response. So I decided to just play the game and find clues within the game itself. This is a little trickier than you might think. Burned Sky is a Windows game designed to run on Windows XP, and I have a newer (2012-era) Mac. Like a square peg in a round hole, this wasn’t going to work.

I downloaded the only version of Burned Sky from the Alivegames website, and got Wine to install and run it. Although it’s a little buggy and temperamental, the game works and I could play through all the levels, though it does tend to crash if too many elements get on screen. On the ‘Credits’ section of the main page, I saw 3 names listed under different roles.

vgh-bs6.png vgh-bs7.png

So I went about trying to track these three people down, and this was not as easy as I thought it would be. This game was made in ~2002, and so the folks who created it may never have created publicly accessible online profiles. Luckily there was a match for Alexander Kontsevoy, and I found his Behance page. Alexander created the artwork, so finding a Behance page in his name was a good breakthrough. I messaged him via Behance but got no response. I’m not surprised though, since, well, it’s Behance. With a little more digging around, I found a Telegram number associated with the company. In my very limited experience, Central Asian countries and some parts of Russia prefer using Telegram, so I messaged the number asking for Alexander’s contact. Within a day I got a reply, along with the contact email I was looking for. The person messaging me also told me that Alexander had worked as a game designer a few years ago, which was really nice to hear as I was still unsure if it was the same person.

I wrote to Alexander and got a very positive reply within a day; he really was the same person who created the artwork for the game! It felt great to make some good headway with my investigation and research.

I asked him some questions regarding the development of the game; I’ll paste the exchange below verbatim to put things in context.

Alexander: Burned Sky was my very first project as a game graphics designer. Me and my fellow students were working on it for a couple of weeks when we studied at university, entitled our small partnership as ‘Digital Fantasy’ and tried to self-produce the game through some websites. As I remember, it was released in 2002, eighteen years ago. Anyway, it had almost zero financial profit (frankly, the game is kind of trashy). So unfortunately, there was no “full version”, the game you played is the only one that was released.

Salil: I wanted to understand the context behind Burned Sky, and how it came together. What was the planning like and how did you figure out the final story and design of the game?

Alexander: As it was our first project, the planning was really simple: we decided to make a game that we could handle with our limited skills. So it should not take too long to develop (short term is really important, because indie developers lose motivation quickly), be technically simple and have some sort of automation in terms of gameplay building. I don’t quite remember who came up with the idea of a 2D shooter, but our programmer Sergey said that he could handle coding a 2D sprite engine with 1-bit transparency (not a commonly used 8-bit alpha-channel).

So we agreed that it should be a game with aircraft that flies around and shoots, and I got to the graphics straight away. There was no story behind it at first, I just created 3D models and textures and we tried to make it work and look nice. The story actually came later, when I assembled levels and pre-level screens. We needed to add some explanation on the mission objective, so I came up with short text intros that finally became some kind of a plot. No one actually took this seriously, so the story is almost non-existent and shallow. I guess that “Rebels of Army of Liberation” was added to the game description at publishing, there are no such details in the game at all.

Salil: I found a version of the game in 2002 via waybackmachine on alivegames.com, see screenshot below. Were there any other versions of the game, or updates released after the first version?

Alexander: Alive Games is a publisher company – two local guys that tried to make some money on selling indie games, and we gave them Burned Sky for publishing.

As I mentioned earlier, there were no other versions. I think that the game was simply stripped down, and provided access to all levels after you bought the ‘full version’, but you still can see all pre-level screens in the graphics folder (they are not even compressed), so you can see what the actual game size is (it was 10 levels in total, if I remember right). I’m also not quite sure how the game was protected, but I think that you simply got the missing level data files when you bought a ‘full version’.

Our director Eugene also released Burned Sky Shooter a bit later – a mini-game with a similar name. It had nothing in common with Burned Sky, except for the graphic sprites, and actually was a Space Invaders-like shooting action that ran directly over your desktop.

Salil: How long did it take to make the game, and did you use a game engine to make the game?

Alexander: Everything took two or three weeks. We did not use any ready-made engine; Sergey wrote something himself. I believe the game used the DirectX Windows API, with bitmap sprites for the aircraft positions (different rotation angles), and a key color to indicate transparency. So the engine does not even rotate the sprites, just draws them. Really primitive.

To assemble levels, we created a small application that allowed us to open/create a level file, specify the level size (like 64x64 tiles), draw with different tiles on the map and save.

All coding was done in Delphi/Pascal. Not a regular choice for gamedev, but Delphi was popular those days for ease of entry and rapid development.

Salil: Were there any other ideas for the game? Do you still have the art for this game, or any other rejected ideas/concepts?

Alexander: There were no sketches, tryouts, or unaccepted versions. I simply created models in 3DSMax using my experience as a 3D-modelling hobbyist, rendered them and tried to make them as good as I could at the moment. Others had even less expertise in graphics, so there was no criticism and everything that was done got right into the game. The same goes for the programming part: Sergey was working on the engine and others did not affect his work much. We all wanted to create something finished, and did not care much about polishing and perfection – we simply did not have enough skills to do anything significantly better.

The only thing that was tweaked a bit was the gameplay – the aircraft acceleration, max speed, set of weapons, how they work and how much power they have. We wanted to achieve a playable result that was fun, but still challenging. Not sure that we dialed it in perfectly :)

As for the art, I think it is gone with my old computer. I did not back up anything then (now I do), and lost a lot of my early 3D artwork and other stuff. That is also the reason I cannot recall the dates perfectly: I’m used to relying on records and file creation dates, but do not have them now.

Salil: What are the other game projects you were later involved with? Any exciting ones you’d like to share?

Alexander: The next project with Digital Fantasy was Treasure Hunter: Treasure Hunter - Free download and software reviews - CNET Download.com

Treasure Hunter is also kind of trashy and not really playable (insanely hard!), but it’s in 3D. I also have an archive with the full version. It still can be run, but works weirdly on modern graphics cards.

Then in 2006 I got a job in a gamedev studio, Funlime Games, and created several fun games with them:

  • Zoo Volley – a fun volleyball game with animals for smartphones (Flash version is available at some websites)

  • Snow Patrol – a snowballs-throwing casual action game

  • Mythic Marbles – a nice marbles board game published by PlayFirst

  • Fashion Dash – this was one of the famous Dash series published by PlayFirst

  • Sacranta – not sure if it was released (I quit Funlime Games at the end of development), but I have an almost complete playable version.

Salil: Whatever happened to the other team members? I can’t seem to find them online.

Alexander: The guys are fine :) As far as I know, Sergey and Eugene did not work on any games after that, except for Treasure Hunter (it was done by me and Eugene) and Burned Sky Shooter (Eugene’s mini-project). They went into business applications development and management.

So among us three, I had the most experience in gamedev (still pretty humble though), but finally quit Funlime Games in 2007 and switched to web and mobile UI design.

You can download the game here: http://www.alivegames.com/burned_sky/

It was great to finally understand the context and speak with one of the developers behind one of my favourite games. Next up: Skies of War.

The Mac is dead. Long live the Mac.

arm-mac.png

So this is it. The end of an era. Intel has seemingly confirmed that they will lose Apple as a customer in the next couple of years, possibly from 2020 onwards. This is a very exciting time for the Mac. Just like the switch from PowerPC to Intel, this is a major change that will shake up the world of personal computers, and could redefine what a personal computer even is.

Let’s unpack this. But first, a bit of history.

In 2005, Apple announced that they would be switching over from IBM-built PowerPC processors to Intel’s x86 processors. Although PowerPC processors were very fast, they were quite power-hungry and ran quite hot: undesirable qualities for use in a laptop. There were also issues with IBM being able to manufacture the chips on time. Steve Jobs decided enough was enough and announced the decision to switch over to Intel chips. This was easier said than done, as this wasn’t a case of simply swapping out the processors and hitting the restart button.

Running Mac OS X on Intel chips required porting and recompiling the operating system for the new architecture. And as if that weren’t a mountain to climb by itself, every application written for the Mac would also need to be rebuilt to run natively on the new Intel processors. However, with some clever software to bridge the gap, Apple managed to completely switch over to Intel chips within 2 years. A translation layer called Rosetta allowed existing PowerPC software to run on Intel chips. This wasn’t an ideal solution though, and some applications, such as the Adobe suite used by a large majority of Apple’s customers, needed to be ported natively in order to perform properly on the new processors.

As tumultuous and inconvenient as those 2 years may have been, it was definitely worth it. Apple was back on top, making some of the most powerful machines their customers could buy. Intel kept making big advances year on year, and customers lined up to get their hands on a Mac. All was well. Apple used AMD or Nvidia graphics cards in their higher-end configurations, and its pro users were very happy. Fast forward to 2012, and the first noises of discontent could be heard. The Mac Pro wasn’t as fast as it should have been, and Intel’s pace of improvement in both processor performance and manufacturing started to slow. The next-generation Retina display MacBook Pro lineup was announced in late 2012, and thereafter things started to go wrong. Apple was unable to update the 2012 models adequately, and it took them until 2016 to launch the next-generation model. Ordinarily 4 years isn’t that long a time, but Intel’s ever-smaller rate of improvement, combined with Apple’s obsession with creating thin and light computers, was starting to affect the upgrade schedules of their pro users.

Around this time, applications were starting to rely on parallel compute hardware, which dramatically accelerated their speed and efficiency in various tasks. Graphics processing units, or GPUs, were becoming more and more popular as productivity software for creatives started to rely on them more heavily than ever. This is an important factor, as you shall see later on.

Imagine this scenario. It’s early 2016. You’re a pro user with the latest 15-inch MacBook Pro. You’ve got an Intel quad-core processor that is a generation older than the competition, and an ageing Nvidia GPU which barely gets the job done with the demands of newer pro software. It’s been a few years, and you’ve held off buying a new laptop as you know Apple is launching the next-generation MacBook Pro soon. The leaks promise a super thin and light laptop expected to pack the latest and greatest internal hardware. You’re not happy, as your work is getting affected, but no matter. The new Mac will be here soon, and all will be well. It’s now October 27th, 2016, and Apple has finally revealed the new MacBook Pro, and now you’re very angry indeed. The new laptops look fantastic, and come in Space Grey! But they are woefully inadequate at launch, barely faster than your current machine, and have none of the traditional ports! Crucially, they also don’t have a pro-level GPU. What now?

Apple had been concentrating on their mobile devices, namely the iPhone, and had clearly taken their eye off the ball. There were reports of hardware and software teams being told to prioritise development for iOS and the iPhone. So say the rumours, but long story short, you were essentially getting screwed over by Apple. Betrayed even, one might say. To add to the misery, the laptops suffered battery issues, display issues, speaker blowouts, malfunctioning ports, and had chronically flawed keyboards which could turn a MacBook into a very expensive paperweight. Apple appeared to have not only shot themselves in the foot, but to have started painfully hacking away at every part of their legs with a machete. It was brutal to see this unfold.

In 2018, Apple updated the MacBook Pros with significant upgrades, but they still suffered from the chronic keyboard issues and, now, significant thermal throttling. The GPUs were still a sore point and, even in 2018, did not officially support virtual reality, which is borderline criminal. Was this tragic or funny? Both, perhaps.

The situation for a macOS pro customer is not great at the moment. Short of spending a preposterous amount of money, the current lineup is average at best, and hopelessly underpowered in comparison to its Windows rivals at worst. But history often repeats itself, and I believe we are currently at the end of an era, and at the cusp of a new one. This change of guard is perhaps the most significant in the history of the Mac, and looks set to propel the Mac back to where it belongs: right at the top.

I’ll have to rewind again and show you, frame by frame in slow motion, how this change will come to pass. It all starts with the inception of the iPhone, back in 2007. Apple wanted to make a small and powerful phone, and at the time looked to Intel to design and manufacture the processor. But Intel didn’t see the value of investing in the design and production of a low-power chip, and wanted to focus on creating products for laptops and desktops. Apple instead turned to ARM, a small British company, for the design of the main CPU, which was manufactured by Samsung. Intel did not see smartphones or mobile devices as the future, and ultimately missed the boat. Long story short, ARM now has a large lead in the design of mobile chips, and its licensing model allows customers to choose their own manufacturers. This is critical, because it allowed Apple to negotiate better deals and save money on manufacturing costs instead of being held to ransom by a single supplier such as Intel. In 2019, chances are that the device you’re reading this on, or a device you own, is powered by an ARM-designed chip. Both Qualcomm and Apple use ARM’s architecture in the design of their mobile chips.

Chips with ARM cores are excellent for use in mobile devices, thanks to their low power draw and minimal thermal output. It’s not only mobile devices which use them: servers are also starting to adopt them for low-power instances in a bid to cut electricity costs. ARM and Intel employ different instruction sets in their processors: ARM uses Reduced Instruction Set Computing (RISC), while Intel’s x86 processors use Complex Instruction Set Computing (CISC). In practice this matters less than you might think, as the devices you use usually contain a mix of chips from both camps. Each approach has its advantages and disadvantages for certain tasks, which is why they are often found working together.

Intel designs and manufactures their own chips, which gives them greater control over design and production: the ability not only to innovate across the board and create better products, but also the privilege to charge more, leading to increased profits. ARM, however, has a different business model. Advanced RISC Machines, ARM for short, designs the chip architecture, which they then license to other companies (such as Qualcomm), who manufacture chips based on it and sell them to other companies (such as Samsung or Apple) to use in their devices. This is usually the CPU, the GPU, or both. Other integral parts of a computer, such as the memory, connectors, and radios, are designed and manufactured by yet other companies. So although Intel designs and manufactures CPUs and GPUs, the final product in the consumer’s hands is an amalgamation of parts from various manufacturers.

Mobile devices use a highly compressed form of this amalgamation called a System on a Chip. Abbreviated to SoC, this thumb-sized bit of silicon contains all the major parts of a computer, such as the CPU, GPU, memory, and radios. So Apple or Qualcomm will buy parts from various suppliers, or create their own, to eventually make an SoC. A company like Qualcomm then sells its SoCs to manufacturers such as Samsung, LG, or Google for use in their devices, while Apple keeps its SoCs for its own.

As the rapid decline of the Mac unfolded, Apple was making huge strides with their mobile devices. As of 2019, the iPhone and iPad are some of the most powerful mobile devices you can buy, with great battery life to boot. The A-series chipsets which power these devices are class-leading, and competitors are struggling to keep up with the year-on-year improvements achieved by Apple. This is not purely down to the physical hardware itself: the software running on those chips has been exceptionally well optimised to ensure efficient use of resources. Since 2011, their chips have gotten up to 36 times faster in multi-core tasks and 17 times faster in single-core tasks. In comparison to its competitors, Apple has been focussing on ensuring that the hardware they design is used efficiently by the software they write. Navigating through marketing and PR fluff is quite a difficult task for the average consumer, as certain metrics advertised by manufacturers have next to no impact on the quality of the user’s experience of the device being sold. Case in point: Qualcomm advertised the higher number of cores their SoCs had in comparison to Apple’s, but this didn’t matter much, as there wasn’t much software written for Android devices (which used Qualcomm’s products) to take advantage of those extra cores. Extra cores are usually associated with better multi-tasking performance, better battery life, and more.

Diverging back to the Mac, we can now see a situation where Apple has the ability to create very powerful chips on the ARM architecture, with exceptionally powerful GPUs, the likes of which could rival traditional gaming consoles. The caveat here is that the software has to be well optimised for these statements to hold true. The potential for Apple to use ARM-based chips in the Mac is ripe, and we will probably see the first implementation in the model which succeeds the 12” MacBook. Subsequently, the entire product lineup will eventually switch over to the new ARM-based chips, with the ‘Pro’ lineup last to make the change. Intel will keep pushing the boundaries of what is possible, and there is still time left to convince Apple to retain their chips for the more powerful ‘Pro’ hardware.

It is a little tricky to directly compare Apple’s ARM-based A-series chips with Intel’s latest and greatest. Geekbench and SPEC2006 results (synthetic benchmarks), amongst others, can be referenced, but the truth is altogether more complex. There isn’t yet a benchmark which can conclusively prove whether the A-series chips are faster or not. It’s also important to consider that software optimisation is critical, so inadequately optimised software can skew the results.

But taking the results as an aggregate, and applying some healthy speculation, we can reasonably assume that the A12X Bionic, Apple’s most powerful chip, is about 30-40% slower than Intel’s desktop-class i7 processors (such as the i7-8700K). Taking into account the A12X Bionic’s lack of active cooling (which is to say, it has no fans or other cooling mechanism) and dramatically lower power consumption, the gap doesn’t seem so large after all. It’s fair to say that the A12X may not be able to sustain the same performance over extended periods of time, but with adequate cooling, this limitation can also be overcome. The A13X could help paint a better picture of what is to come, and of whether Apple can sustain the rapid year-on-year performance leaps which would enable them to confidently use ARM-based chips in the MacBook. Another potential upshot could be a dramatic increase in GPU performance: we could see performance equivalent to a GTX 1070 or more without the need for a discrete GPU. Perhaps we could see a combination at work here, with the integrated GPU working alongside an AMD-supplied discrete GPU. It is possible that Apple could rely solely on their own GPUs by 2025, depending on the performance increases they can make.

So how will the switch affect you? Well, the first thing you’d be worried about is the transition pains. Will your applications continue working? Apple’s XNU kernel for macOS is compatible with both x86 and ARM64, so that part should be painless. There’s also Marzipan for bringing apps from iOS to macOS, so it’s likely that Apple will have a similar solution for translating native binaries to ARM, like Rosetta back in the day. We might see one or two generations of macOS which still support older x86 apps through a translation layer or emulation, but eventually Apple will kill that support off and become ARM-only. Usually it takes them two OS releases for this to happen: so 2020-22 with support for both x86 and ARM, and after 2022, probably ARM only.

We could see not only a healthy increase in battery life, but also a solid improvement in connectivity options: potentially more powerful WiFi antennas, perhaps even eSIM capability. More realistically, additional hardware in the realm of machine learning and security. A good example is the T2 chip that ships with the latest MacBook Pros. The T2 chip not only handles encryption and security, but also video encoding, specifically the new H.265 standard.

There will still be some pain in making the switch, especially if Intel or AMD find some huge gains in the next couple of years. Apple has always been exceptional at extracting performance from sub-par hardware, but at this point it is difficult to predict exactly how the transition will play out.

I honestly can’t quite figure out how this will play out, or what to look out for. But there is no smoke without fire, so it’s fair to say there is something big on the horizon, and it should be exciting, and perhaps actually revolutionary.

Huge thanks to Coreteks for contributing to this piece. Do subscribe to him on YouTube, and watch his latest video on the future of Intel and AMD.

References:

https://mashable.com/2016/06/29/intel-macs-at-10/#dqeByJREpkqp

https://appleinsider.com/articles/18/12/06/qualcomms-snapdragon-855-is-over-a-year-behind-apples-a12-bionic-lacks-a-premium-android-audience

https://reveried.com/article/arm-processors-nearing-performance-parity-with-x86

https://www.slashleaks.com/l/apple-designed-arm-mac-processor-benchmarks

https://www.youtube.com/watch?v=IfHG7bj-CEI&frags=pl%2Cwn

Data of a Third Kind

Over the last few years, I’ve been trying to play with data, and extrapolate meaning from datasets. It’s all very functional, and the more complex the dataset, the more fun it’s been to find correlations and overlaps. It does get dull at times, and very frustrating when nothing seems to work out. There are many tools with which data can be handled, be it the ever-functional spreadsheet, the notepad file, or even the back of a tissue. My weapon of choice is an IDE called Processing. An IDE, or Integrated Development Environment, is a platform that allows for the writing and compilation of code, amongst other functions. Created by Ben Fry and Casey Reas, Processing uses a simplified flavour of Java, which makes coding a lot more accessible to visual artists who have no prior experience with programming. What makes Processing different from other tools for working with data is that it can not only work with a lot more data, but also with different kinds of data.

Sound is a very strange medium. I’ve always thought of sound as a very powerful tool, one which can affect my emotional experience deeply. We hear when our eardrums “feel” the vibrations of the air around us, and our brains perceive those vibrations as sound. A combination of sounds forms a musical composition. Many tend to hear music, or feel it. But I’ve started to see music from a different perspective. To show you what I mean, this is what music looks like to me nowadays:

ff-image.png
FFT Analysis of Moonlight by Alex Lustig

The numbers are the result of an FFT analysis of the sound. The FFT, or Fast Fourier Transform, breaks the sound down into its individual frequencies, which is what you see above. These numbers are the components of the music you hear, describing each vibration that enters your ear. The thought of music, which is so powerful and often indescribable, being broken down into numbers is exciting and fascinating: quantifying something which can’t be easily described.
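For the mathematically inclined, the FFT is just a fast algorithm for computing the discrete Fourier transform, which, for $N$ audio samples $x_n$, is defined as

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \dots, N-1.$$

The magnitude of each $X_k$ measures how strongly the frequency of bin $k$ is present in those samples; those magnitudes are the numbers you see above.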

Now that we have these values, what do we do with them? Using Processing, I started experimenting with printing music, and making sound visible. These values are the bridge between hearing something and visualising it. The frequencies described in this dataset are exactly what you hear, so it stands to reason that these values might be equally useful when trying to “see” sounds. So I started printing these values out, in simple shapes and forms, to see what sound looks like.
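If you’re curious what this looks like in practice, here’s a minimal Processing sketch of the idea, using the Sound library that ships with Processing. The filename and band count are placeholders, not my actual setup:

```java
// A minimal sketch: analyse a playing sound file with FFT and draw its
// spectrum as simple vertical lines. "moonlight.mp3" is a placeholder.
import processing.sound.*;

SoundFile track;
FFT fft;
int bands = 512;                        // number of frequency bands (power of 2)
float[] spectrum = new float[bands];

void setup() {
  size(512, 360);
  track = new SoundFile(this, "moonlight.mp3");
  track.loop();
  fft = new FFT(this, bands);
  fft.input(track);                     // analyse whatever the file is playing
}

void draw() {
  background(255);
  stroke(0);
  fft.analyze(spectrum);                // fill spectrum[] with current magnitudes
  for (int i = 0; i < bands; i++) {
    // one line per band; a taller line means more energy at that frequency
    line(i, height, i, height - spectrum[i] * height * 5);
  }
}
```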

waves.jpeg radial.png frequency.png

It was fascinating, no doubt, but it seemed to lack something important. These visuals were too static. Sound is an active phenomenon, always changing, full of energy and life. To inject that spark of life into the visualisations, I switched tack and started animating sound, rather than printing static images.

I could finally see progress now. There’s still a lot of work to be done, but this was a step in the right direction. The energy of the sound I was hearing was finally starting to make itself seen. The visuals looked like they had life in them.

In every experiment, most of my efforts went into massaging the values: working with mathematical functions and logic gates to induce small amounts of intelligence into the code, allowing the sound to control the visuals. I didn’t want to get in the way of the sound producing the visuals. I simply gave it some rules and restrictions to follow, and let the sound dictate everything else. It is amazing to see how big a role mathematics and logic play in understanding and creating sound and visuals, the mainstays of traditional art.
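As a rough illustration of that massaging (a sketch under assumed values: the 0.2 smoothing factor, the 8-band bass sum, and the 0.6 threshold are all invented for the example), each band can be eased toward its new value with lerp(), and a simple threshold rule can act as a crude logic gate:

```java
// Sketch of the "massaging" step: smooth the raw FFT values over time and
// add a threshold rule so the sound, not me, decides when something happens.
import processing.sound.*;

SoundFile track;
FFT fft;
int bands = 256;
float[] spectrum = new float[bands];
float[] smooth = new float[bands];      // persists between frames

void setup() {
  size(512, 360);
  track = new SoundFile(this, "track.mp3");  // placeholder filename
  track.loop();
  fft = new FFT(this, bands);
  fft.input(track);
}

void draw() {
  background(0);
  fft.analyze(spectrum);
  float bass = 0;
  for (int i = 0; i < bands; i++) {
    // ease each band toward its new value instead of jumping to it
    smooth[i] = lerp(smooth[i], spectrum[i], 0.2);
    if (i < 8) bass += smooth[i];       // crude bass level from the lowest bands
  }
  if (bass > 0.6) {                     // a "logic gate": react only above a threshold
    noStroke();
    fill(255, 60, 60);
    ellipse(width / 2, height / 2, bass * width / 2, bass * width / 2);
  }
  stroke(255);
  for (int i = 0; i < bands; i++) {
    line(i * 2, height, i * 2, height - smooth[i] * height * 5);
  }
}
```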

To push myself, I took up the 36 Days of Type challenge to see how creatively I could visualise the data derived from sound every day. The goal was to use the data differently for every letter, and to use increasing levels of intelligence by giving the sound more autonomy in creating the visuals.

Music: T-Rex - Genuine Color

Music by Eh-Fhin

I’ve just started to scratch the surface of the possibilities here, and I still have to analyse sound a lot more deeply to understand the smaller nuances. But the process of doing it is truly enjoyable, and it is fascinating to see how minor changes in frequency can dramatically alter the impact of sounds.

By affecting the manner in which the values change, the visuals can be dramatically altered. The more intricately the values are calculated and managed, the more detailed the visuals can get. It’s a bit like mathematical surgery, fine-tuning the values and mathematical functions to get the desired results.
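As a toy example of that fine-tuning (illustrative numbers only, not my actual code), even the exponent a value is raised to changes the whole character of the visuals:

```java
// Illustrative only: two ways of shaping the same raw FFT magnitude.
// An exponent below 1 lifts quiet detail; above 1 keeps only the loudest peaks.
float shaped(float raw, float exponent) {
  float v = constrain(raw * 5, 0, 1);   // normalise the small FFT magnitudes
  return pow(v, exponent);
}

// shaped(spectrum[i], 0.5) -> busy, detailed motion everywhere
// shaped(spectrum[i], 3.0) -> calm visuals that only jump on the biggest hits
```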

Whoever said mathematics isn’t fun?

Music: Neptune by Donny Hills

Music: NUDE by Wolves (ft. Kurus)

Toolkits and Dissemination

toolkits.jpeg

The trickiest, but arguably the funnest bit of the project. Yes, funnest. Initially I had started off this project envisioning a toolkit as the end result. But what exactly is a toolkit? More precisely, a method toolkit is a collection of exercises and activities designed to propagate knowledge and scale its impact. Research and experiments conducted in one place are usually packaged into a toolkit in an attempt to replicate that impact at scale.

The aim is not to teach a man how to fish. The aim is to teach a man to teach himself how to fish. Toolkits are supposed to let the on-ground team replicate the methods used in the successful pilot. Although guided workshops are hosted to teach the process of using the toolkit, the onus is on the toolkit itself to be accessible enough to disseminate the information. Having read the wonderful essays by Akshay and Quicksand on toolkits, I’ve started to reconsider my approach toward toolkits.

My mistrust of toolkits started long before the project started. I would never use a toolkit unless I was mandated to, or in a workshop setting. The list of things I detest in toolkits is long, so I’ll focus on the big ones.

The sheer inaccessibility of toolkits is the most frustrating part. The language, the way the tools are structured, and the tone of the content make them seem cold, aloof, and almost academic. Then the lack of context is borderline criminal. Sure, a toolkit broadly works in a few situations, but there will always be exceptions and special cases.

The merit of any toolkit is judged on how often it is used. If no one feels like using the tools, then it has failed, no matter how technically resolved it may be. There is a caveat to add here: toolkits are always aimed at a very particular target audience, people who are in a specific position at a specific time, doing a specific activity.

I could be completely wrong about this, and statistics will probably prove so, there being enough numbers to demonstrate the success of toolkits. But for me, personally, they simply don’t work, and there are better ways of disseminating information. It’s also important to recognise that the method of dissemination should depend entirely on the target user, and not on the nature of the knowledge itself. Of course, in an ideal world every specific problem would have its own bespoke solution, but since that’s not going to happen anytime soon, we have to do the best we can with general solutions.

In my poorly informed, highly inexperienced opinion, toolkits shouldn’t teach a fisherman how to fish, because unless you cover every single fishing technique, they won’t be very helpful to those who aren’t in the exact situation described in the toolkit. What they should do is teach the principles and concepts of fishing, so that you can learn and develop your own unique fishing technique. So: teach a fisherman to teach himself how to fish. Everyone is creative in their own way, and will figure out solutions to their own particular problems when taught the basic principles and concepts.

It’s a lot like mathematics. Once you learn the basic formulas, and how to derive them, you can apply them to any problem. Another lesson I learnt from mathematics is that you have to understand the problem very well. Once you understand what the situation requires, you can use the necessary formula. Of course, real life is nothing like mathematics; there exist many complexities that no toolkit can handle.

I really like reading case studies. HBR is a magazine full of them, and I devour it the moment I can get my hands on a copy. On the face of it, case studies are just another layer of abstraction between the reader and the information. But that layer of abstraction gives the information context, and tells the reader how it was relevant in that context. One can then separate the information from the story, with the added understanding of how it was used in context, and hence understand it better. I tend to grasp concepts a whole lot better when they’re presented to me as case studies. The power of narrative to explain a concept should never be underestimated.

One alternative is to look at the other end of the spectrum and make a checklist of sorts, which the user can use as a reference to get through the process. This would leave the entire process in the creative control of the said user, and would merely direct them through a particular order of things to do. This is suitable for a limited number of tasks and processes, and would compromise on the quality of the process.

At the other extreme is individually training people for the task they’re trying to tackle, giving them both the knowledge and the ability to execute that know-how. This, of course, is highly resource-intensive but will guarantee results; it’s also highly inefficient when trying to disseminate knowledge at scale. The ideal solution lies somewhere at the intersection of these methods, and will change according to the broader context.

I’m still trying to figure out the differences between lived experiences and reading about lived experiences. There is a world of difference in the impact each has on a person; that goes without saying. But in the context of learning from said experiences, does it really make that much of a difference? Is learning from reading about mistakes just as effective as actually making those mistakes? Can I learn that it’s not healthy to touch boiling water from reading about it, just as well as I can by actually touching boiling water once and facing the consequences? As a rational, thoughtful individual, I’d like to think so. There are many strong arguments in favour of actually making mistakes in order to learn from them once and for all. But perhaps, when educating about the dangers of a certain activity, just reading about an incident should be impactful enough.