Book club: Rust after the honeymoon

Earlier this month Daniel, Lars and I got together to discuss Bryan Cantrill’s article Rust after the honeymoon, an overview of what keeps him enjoying working with Rust after having used it for an extended period of time for low level systems work at Oxide. We were particularly interested to read a perspective from someone who was both very experienced in general and had been working with the language for a while. While I have no experience with Rust, both Lars and Daniel have been using it for some time and greatly enjoy it.

One of the first areas we discussed was data bearing enums – these have been very important to Bryan. In keeping with a pattern we all noted, these take a construct that’s relatively commonly implemented by hand in C (or skipped as too much effort, as Lars found) and provide direct support for it in the language. For both Daniel and Lars this has been key to their enjoyment of Rust: it turns things that are good practice or common idioms in C and C++ into first class language features, which makes them more robust and allows them to fade into the background in a way they can’t when done by hand.
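As a minimal sketch of what this looks like (the example types here are my own illustration, not from the article), each variant carries its own data and the compiler insists that every match handles every variant – the discriminant and careful casting you would write by hand in C simply disappear:

```rust
// A data bearing enum: each variant carries its own payload.
enum Event {
    KeyPress(char),
    Resize { width: u32, height: u32 },
    Quit,
}

// The compiler checks that every variant is handled here; adding a new
// variant later turns any unhandled match into a compile error.
fn describe(event: &Event) -> String {
    match event {
        Event::KeyPress(c) => format!("key {c} pressed"),
        Event::Resize { width, height } => format!("resized to {width}x{height}"),
        Event::Quit => "quit requested".to_string(),
    }
}

fn main() {
    println!("{}", describe(&Event::Resize { width: 800, height: 600 }));
}
```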

Daniel was also surprised by some omissions, some small such as the ? operator but others much more substantial – the standout one being editions. These aim to address the problems seen with version transitions in other languages like Python, allowing individual parts of a Rust program to adopt potentially incompatible language features while remaining interoperable with older editions of the language, rather than requiring the entire program to be upgraded en masse. This helps Rust move forwards with less need to maintain strict source level compatibility, allowing much more rapid evolution and helping deal with any issues that are found. Lars expressed the results of this very clearly: while lots of languages offer a 20%/80% solution which does very well in specific problem domains but has issues for some applications, Rust is much better placed to move towards general applicability by addressing problems and omissions as they are understood.
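As a small illustration of the mechanism (this is the well known async keyword example rather than anything from the article), a keyword added in a later edition only affects crates that opt into that edition, while crates on different editions still link together in one program:

```rust
// Compiles under edition 2015, where `async` is an ordinary identifier.
// Under edition 2018 or later the same code is rejected because `async`
// became a reserved keyword – but a 2015-edition crate containing this
// function can still be linked with 2018-edition crates.
fn async(x: u32) -> u32 {
    x + 1
}

fn main() {
    println!("{}", async(41));
}
```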

This distracted us a bit from the actual content of the article and we had an interesting discussion of the issues with handling OS differences in filenames portably. Rather than mapping filenames onto a standard type within the language and then having to map back out into whatever representation the system actually uses, Rust has an explicit type for filenames which must be explicitly converted on those occasions when it’s required, meaning that a lot of file handling never needs to worry about anything except the OS native format and doesn’t run into surprises. This is in keeping with Rust’s general approach to interfacing with things that can’t be represented in its abstractions: rather than hide them it keeps track of where things that might break its assumptions are and requires the programmer to acknowledge and handle them explicitly. Both Lars and Daniel said that this made them feel a lot more confident in the code they were writing and that they had a good handle on where complexity might lie; Lars noted that Rust is the first language he’s felt comfortable writing multi threaded code in.
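A small sketch of this using the standard library (the path and handling here are my own illustration): filenames stay in the OS-native OsStr type, and getting a UTF-8 string out of one is an explicit, fallible step rather than something that happens silently:

```rust
use std::ffi::OsStr;
use std::path::Path;

fn main() {
    // Paths are built from OS-native strings; no conversion happens here.
    let path = Path::new("/tmp/example.txt");

    // Converting to a Rust string is explicit and can fail, because the
    // OS-native name may not be valid UTF-8.
    match path.file_name().and_then(OsStr::to_str) {
        Some(name) => println!("file name is {name}"),
        None => println!("file name is not valid UTF-8"),
    }
}
```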

We all agreed that the effect here was more about having idioms which tend to be robust, and which both encourage writing things well and give readers tools to help them know where particular attention is required – no tooling can avoid problems entirely. This was definitely an interesting discussion for me with my limited familiarity with Rust; hopefully Daniel and Lars also got a lot out of it!

Book club: JSON Web Tokens

This month for our book club Daniel, Lars, Vince and I read Hardcoded secrets, unverified tokens, and other common JWT mistakes, which wasn’t quite what we’d thought when it was picked. We had been expecting an analysis of JSON web tokens themselves, as several of us had been working in the area and had noticed various discussion of problems with the standard, but instead the article is more a discussion of the use of semgrep to find and fix common issues, using issues with JWT as examples.

We therefore started off with a bit of a discussion of JWT, concluding that the underlying specification was basically fine given the problem to be solved, but that as with any security related technology there were plenty of potential pitfalls in implementation, and that sadly many of the libraries implementing the specification make it far too easy to make mistakes such as those covered by the article through their interface design and defaults. For example, interfaces that allow interchangeable use of public keys and shared keys are error prone, as is making it easy to access unauthenticated data from tokens without clearly flagging that it is unauthenticated. We agreed that the wide range of JWT implementations available and successfully interoperating with each other is a sign that JWT is getting something right in providing a specification that is clear and implementable.
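As a hedged sketch of the kind of interface that closes off the first of those pitfalls (this assumes the jsonwebtoken crate and its decode/Validation API, which the article itself doesn’t discuss), pinning the expected algorithm and making the key type explicit stops a public key being treated as if it were a shared secret:

```rust
// Sketch only: assumes the jsonwebtoken and serde crates.
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

fn verify(token: &str, shared_secret: &[u8]) -> Result<Claims, jsonwebtoken::errors::Error> {
    // Pin the expected algorithm: a token claiming "alg": "none" or an
    // asymmetric algorithm is rejected rather than silently accepted.
    let validation = Validation::new(Algorithm::HS256);
    // The key type is explicit too, so a public key can't be handed in
    // where a shared secret is expected.
    let key = DecodingKey::from_secret(shared_secret);
    decode::<Claims>(token, &key, &validation).map(|data| data.claims)
}
```

The point is the shape of the interface rather than the specific crate: the caller has to state up front which algorithm and key type they expect, and nothing is returned until the signature has been verified.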

Moving on to semgrep we were all very enthusiastic about the technology: language independent semantic matching with a good set of rules available for a range of languages. Those of us who work on the Linux kernel were familiar with semantic matching and patching as implemented by Coccinelle, which has been used quite successfully for years both to avoid bad patterns in code and to make tree wide changes; as the article demonstrates, it is a powerful technique. We were impressed by the multi-language support and approachability of semgrep, with tools like their web editor seeming particularly helpful for people getting started with the tool, especially in conjunction with the wide range of examples available.

This was a good discussion (including the tangential discussions of quality problems we had all faced dealing with software over the years, depressing though those can be) and semgrep was a great tool to learn about; I know I’m going to be using it for some of my projects.

Book club: Zettelkasten

Recently I was part of a call with Daniel and Lars to discuss Zettelkasten, a system for building up a cross-referenced archive of notes to help with research and study that has been getting a lot of discussion recently, the key thing being the building of links between ideas. Tomas Vik provided an overview of the process that we all found very helpful, and the information vs knowledge picture (by @gapingvoid) in Eugene Yan’s blog on the topic really helped us crystallize the goals. It’s not at all new and, as Lars noted, has a lot of similarities with a wiki in terms of what it produces, but it couples this with an emphasis on the process and constant generation of new entries, which Daniel found similar to some of the Getting Things Done recommendations. We all liked the emphasis on constant practice and how that can help build skills around effective note taking, clear writing and building links between ideas.

Both Daniel and Lars already have note taking practices that they find useful, combinations of journalling and building up collections of notes of learnings over time, and felt that there could be value in integrating aspects of Zettelkasten into these practices, so we talked quite a bit about how that could be done. There was a consensus that journalling is useful, so the main idea we had was to keep maintaining the journal, using that as an inbox and setting aside time to write entries into a Zettelkasten. This is also a useful way to approach recording things when away from a computer: taking notes and then writing them up later. Daniel suggested that one way to migrate existing notes might be to simply start anew, moving things over from old notes as required, and then after a suitably long period (for example a year) reviewing anything that was left and migrating whatever is still needed.

We were all concerned about the idea of using any of the non-free solutions for something that is intended to be used long term, especially where the database isn’t in an easily understood format. Fortunately there are free software tools like Zettlr which seem to address these concerns well.

This was a really useful discussion; it helps to bounce ideas off each other, and this was certainly an interesting topic to learn about, with some good ideas which will hopefully be useful to us.

Book club: Our Software Dependency Problem

A short while ago Daniel, Lars and I met to discuss Russ Cox’s excellent essay Our Software Dependency Problem. This essay looks at software reuse in general, especially in the context of modern distribution methods like PyPI and NPM which make the whole process much more frictionless than the traditional distribution methods used with languages like C. Possibly our biggest conclusion was that the essay is so eminently sensible that we mostly just talked about how much we agreed with it and how comprehensive it was; we particularly admired the clarity with which it explores how to evaluate the quality of free software projects.

Next time we’ll have to pick something more controversial to discuss!

Audio Miniconf 2019 Report

This year’s audio miniconference happened last month in Lyon, sponsored by Intel. Thanks to everyone who attended – this event is all about conversations between people – and to Alexandre Belloni for organizing the conference dinner.

We started off with Curtis Malainey talking us through some UCM extensions that ChromeOS has been using. There was general enthusiasm for the changes that were proposed; discussion mainly revolved around deployment issues with handling new features, especially where topology files are involved. If new controls are added in a topology file update then some system is needed to ensure that the UCM files to configure them are updated in sync or work without the update. No clear answers were found; one option would be to combine UCM and topology files, but it’s not clear that this is useful in general if the configuration is done separately from the DSP development.

Daniel Baluta then started some discussion of topics related to Sound Open Firmware (slides). The first was issues with loading firmware before the filesystems are ready; we agreed that this can be resolved through the use of the _nowait() APIs. More difficult was resolving how to deal with card initialization. Currently the only complete in-tree users are x86 based, so they have to deal with the problems of the incomplete firmware descriptions provided by ACPI – there’s nothing standards based like we have for device tree systems – and assumptions about that have crept into how the code works. It’s going to take a bunch of work to implement, but we came to a reasonable understanding of how this should work, with the DSP represented as a device in the device tree and bound to the card like any other component.

Continuing on the DSP theme, Patrick Lai then led a discussion of gapless playback with format switches; we agreed that allowing set_params() to be called multiple times on a single stream when the driver could support it was the most sensible approach. The topic of associating controls with PCM streams was also discussed; there are some old APIs for this but so little hardware has implemented them that we agreed a convention for control names based on the stream names was probably easier to support with current userspace software.

Patrick also led a discussion of time synchronization for audio streams, both compressed and PCM. A number of systems, especially those dealing with broadcast media, want to bring the local audio clocks as closely into sync with other system and external clocks as possible, for both long term streams and A/V sync. As part of doing this there is a desire to embed timestamps into the audio stream for use by DSPs. There was an extensive discussion of how to do this, the two basic options being to add an extended audio format which includes timestamps (in the compressed API) or additional API calls to go along with the data. The in band data is easier to align with the audio but the format modifications will make it much harder to implement in a standard way, while the out of band approach is harder to align but easier to standardize. We came to a good understanding of the issues and agreed that it’s probably best to evaluate this in terms of concrete API proposals on the list.

Liam Girdwood then took over and gave an overview of the status of SoundWire. This will be reaching products soon, with a gradual transition, so all current interfaces are still in active use for the time being. Immediate plans for development include a lot of work on hardening the framework to deal with corner cases and missing corners of the spec that are identified in implementation, support for applications where the host system is suspended while a DSP and CODEC implement features like hotword detection, and support for dynamic routing on the SoundWire bus. Those with SoundWire hardware agreed that this was a good set of priorities.

We then moved on to a discussion of ABI stability and consistency issues with control names, led by Takashi Iwai. The main focus was control and stream naming, where we have a specification which userspace does use but we currently only verify that it’s being followed manually during kernel development, which isn’t great. We decided that a testing tool similar to v4l2-compliance, which is used successfully by the video4linux community, would be the most viable option here, though there were no immediate volunteers to write the tool.

The next topic was virtualization; this was mainly a heads up that there is some discussion going on in OASIS around a VirtIO specification for audio.

Jerome Brunet then talked through his experiences as a new contributor implementing audio support for Amlogic SoCs. Their audio subsystems are relatively simple by modern SoC standards but still more complex than the simple DAIs that ASoC currently supports well, needing elements of DPCM and CODEC-CODEC link support to work with the current system, all of which presented an excessively tough learning curve. Sadly all the issues he faced were already familiar, and we even have some good ideas for improving the situation by moving to a component based model. Morimoto-san (who was sadly unable to attend) has been making great strides in converting all the drivers into component drivers, which makes the core changes tractable, but we still need someone to take up the core work. Charles Keepax started some work on this previously and says he hopes to find some time soon, with several other people indicating an interest in helping out, so hopefully we might see some progress on this in the next year.

The final topic on the agenda was spreading DSP load throughout the system, including onto the sound server running on the host CPU if the dedicated DSP hardware is overloaded. Ideally we’d be able to do this in a transparent fashion, sharing things like calibration coefficients between DSPs. The main sticking point with implementing this is Android systems, since Android doesn’t use full alsa-lib and therefore doesn’t share any interfaces above the kernel layer with other systems. It’s likely that something can be implemented for most other systems, but it’ll almost certainly need separate Android integration even if plugins can be shared.

We did have some further discussions on a number of topics, including testing, after working through the agenda, but sadly minutes weren’t being kept for those. Thanks again to the attendees, and to Intel for sponsoring.

Linux Audio Miniconference 2019

As in previous years we’re going to have an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Sunday October 27th in Lyon, France, the day before ELC-E. It will take place at the Lyon Convention Center (the ELC-E venue), generously sponsored by Intel.

As with previous years let’s pull together an agenda through a mailing list discussion – this announcement has been posted to alsa-devel as well, and the most convenient thing would be to follow up to it. Of course if we can sort things out entirely via the mailing list that’s even better!

If you’re planning to attend please fill out the form here.

This event will be covered by the same code of conduct as ELC-E.

Thanks again to Intel for supporting this event.

2018 Linux Audio Miniconference

As in previous years we’re trying to organize an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Sunday October 21st in Edinburgh, the day before ELC Europe starts there. Cirrus Logic have generously offered to host this in their Edinburgh office:

7B Nightingale Way
Quartermile
Edinburgh
EH3 9EG

As with previous years let’s pull together an agenda through a mailing list discussion on alsa-devel – if you’ve got any topics you’d like to discuss please join the discussion there.

There’s no cost for the miniconference but if you’re planning to attend please sign up using the document here.

Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn’t find any documentation at all about it on the internet and had to work it out for myself, I figured I’d put what I learned here. Hopefully this will help the next person trying to figure one out, or at least, by virtue of being wrong on the internet, I’ll be able to get someone who knows what they’re doing to tell me how the thing really works.

Bottom plate of drive

The motor drive attaches to the camera using the tripod socket, and a replacement tripod socket is provided on the base of the plate. There’s also a metal plate, with the bottom of the hand grip attached to it, held on to the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment, which (very conveniently) takes 6 AA batteries. This also provides power to the camera body when attached.

Bottom plate with battery compartment visible

On the back of the base of the camera there’s a button with a red LED next to it which illuminates slightly when the button is pressed (it’s visible in low light only). I’m not 100% sure what this is for; I’d have guessed a battery check if the light were easier to see.

Top of drive

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well) while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

We show up

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on the cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is a big part of how other upstream developers become aware of what sorts of hardware and use cases are out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about, but of course it’s not quite that simple: a track record of engagement on the simpler drivers, and the knowledge and relationships built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that – the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that, the better Linux is likely to meet their needs.