Friday, April 22, 2016

A textbox that auto-wraps to fit contents

The Windows Forms TextBox control is a thin wrapper for the underlying Win32 control, and there is no Paint event you can hang extra code on in the normal way. If you write something like mycontrol.Paint += myFunction; and mycontrol is a TextBox then myFunction() will never get called.

You can go the whole hog and

SetStyle(ControlStyles.UserPaint, true);

and provide your own paint method, but then you have to do everything yourself as calling base.OnPaint() is no help. Give it a try, and you will end up with an astonishingly unresponsive textbox.

I wanted to create a version of a Textbox that would resize itself vertically to accommodate extended word wrapped text. That did not result in any issues – all it needed was a FitToContents() method that retained the set width and allowed the height to vary to fit the control content.

protected virtual void FitToContents()
{
    Size size = this.GetPreferredSize(new Size(this.Width, 0));
    if (multiLine) { this.Height = size.Height; }
}

However I also wanted to be able to switch between a label displaying a given string and a textbox capable of editing that string. Just to make that interesting, I wanted a “fade in” and “fade out” animation to switch between the two. While I was at it, I also wanted to support a textbox “Placeholder” facility to keep the UI nice and clean. It was that final element that reminded me that it was possible to slide a limited paint facility into position, to be used in place of the control's own paint while the control was relatively inactive. My own paint facility only needed to manage the fades and the placeholder text display – all the rest could be left to the underlying control.

Fading text in and out was just a matter of tweaking the Alpha component of the control's ForeColor in steps over a defined time period. It was a good opportunity to use the recently introduced async/await functionality, although a timer would have worked just fine. The control border is unreachable to all intents and purposes, although if the control was set borderless one could be drawn around it and arrangements made to fade that rectangle in and out.

OK – I know - WPF and all that, but I also have this funny feeling that as soon as I invest any real time in WPF code Microsoft are going to announce a newer superer duperer common base for Windows apps and it will all be to no avail. Don’t take advice from me though – anything could happen.

The full code for this class can be found at the bottom of this post – usual caveats.

In case anyone fancies just using the placeholder functionality I have generously added an attribute (AutoMultiLine) that can be set false to stop the control re-sizing in response to text longer than the control width. Just set the PlaceHolderText attribute at design or run time. You can always ignore or remove the code associated with the fade.

The control based upon Label with a matching fade facility is very similar but simpler and again the code can be found below. This label also supports automatic height resizing to match the Text.

Please note that these controls are using a language feature from .NET v4.5 so you would need to switch to using a timer to make use of the fade functionality with earlier versions.
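For those earlier framework versions, the await in the fade loop could be replaced with a Windows Forms Timer along these lines. This is an illustrative sketch only, not a drop-in replacement – the method name is invented, and opStep is promoted from a local to a field so the Tick handler can reach it:

```csharp
// Sketch: stepping the fade opacity with a Timer instead of async/await
private Timer fadeTimer;
private int opStep; // step size: positive for fade in, negative for fade out

private void startFadeTimer()
{
    fadeTimer = new Timer { Interval = fadeTime / fadeSteps };
    fadeTimer.Tick += (s, e) =>
    {
        opacity = opacity + opStep;
        if (opacity <= 0 || opacity >= 256)
        {
            fadeTimer.Stop();
            Visible = (opacity >= 255);
        }
        Invalidate(); // repaint with the adjusted alpha
    };
    fadeTimer.Start();
}
```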

Combining these two custom controls into a UserControl made sense as they could then be addressed together as a single entity. This did have its challenges – partly because the TextBox BackColor property can’t be set Transparent and that makes the fade through a bit clunky in one direction – perhaps I should have gone for a “wipe” effect instead…

namespace Adit.Classes.Controls
{
    [ToolboxBitmap(typeof(TextBox))]
    class MultiLineTextBox : TextBox
    {
        #region Private Declarations
        private Color foreColour;
        private Color placeholderColour = Color.Gray;
        private int opacity = 256;
        private int fadeSteps = 10;
        private int fadeTime = 750;
        private string placeholderText = "Type here";
        private bool showPlaceholderText = false, multiLine = true;
        #endregion

        public MultiLineTextBox()
        {
            this.Multiline = multiLine;
            this.WordWrap = multiLine;
        }

        #region Design attributes
        [Description("Fade time in milliseconds"), Category("Behavior")]
        public int FadeTime
        {
            get { return fadeTime; }
            set { fadeTime = value; }
        }

        [Description("Number of steps from transparent 0 to opaque 256"), Category("Behavior")]
        public int FadeSteps
        {
            get { return fadeSteps; }
            set { fadeSteps = value; }
        }

        [Description("Placeholder Text"), Category("Appearance")]
        public string PlaceHolderText
        {
            get { return placeholderText; }
            set { placeholderText = value; Invalidate(); }
        }

        [Description("Placeholder Colour"), Category("Appearance")]
        public Color PlaceHolderColour
        {
            get { return placeholderColour; }
            set { placeholderColour = value; }
        }

        [Description("Automatic switch to multiline"), Category("Appearance")]
        public bool AutoMultiLine
        {
            get { return multiLine; }
            set
            {
                multiLine = value;
                Multiline = value;
                WordWrap = value;
            }
        }
        #endregion

        #region Public methods
        public async void FadeInOut()
        {
            bool saveUserPaint = GetStyle(ControlStyles.UserPaint);
            foreColour = ForeColor;
            int opStep = 256 / fadeSteps;
            if (Visible)
            {
                opacity = 256;
                opStep *= -1;
            }
            else
            {
                Visible = true;
                opacity = 0;
            }
            SetStyle(ControlStyles.UserPaint, true); // set after any Visibility change
            opacity = opacity + opStep;
            while (opacity > 0 && opacity < 256)
            {
                foreColour = fadeColour(opacity, foreColour);
                placeholderColour = fadeColour(opacity, placeholderColour);
                Invalidate();
                await Task.Delay(fadeTime / fadeSteps);
                opacity = opacity + opStep;
            }
            Visible = (opacity >= 255);
            SetStyle(ControlStyles.UserPaint, saveUserPaint);
            if (Visible)
            {
                Invalidate();
                Focus();
            }
        }
        #endregion

        #region Override control methods
        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            this.FitToContents();
        }

        protected override void OnKeyUp(KeyEventArgs e)
        {
            base.OnKeyUp(e);
            FitToContents();
        }

        protected override void OnTextChanged(EventArgs e)
        {
            base.OnTextChanged(e);
            placeholderToggle();
            FitToContents();
        }

        protected override void OnCreateControl()
        {
            base.OnCreateControl();
            placeholderToggle();
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            using (var drawBrush = new SolidBrush((showPlaceholderText) ? placeholderColour : foreColour))
            {
                e.Graphics.DrawString((showPlaceholderText) ? placeholderText : Text, Font, drawBrush, this.ClientRectangle);
                // The underlying control is probably using TextRenderer.DrawText (gdi not gdi+)
            }
        }

        protected virtual void FitToContents()
        {
            Size size = this.GetPreferredSize(new Size(this.Width, 0));
            if (multiLine) { this.Height = size.Height; }
        }
        #endregion

        #region Private methods
        private Color fadeColour(int opacity, Color argbColour)
        {
            return Color.FromArgb(opacity, argbColour);
        }

        private void placeholderToggle()
        {
            showPlaceholderText = (Text.Length > 0) ? false : true;
            SetStyle(ControlStyles.UserPaint, showPlaceholderText);
        }
        #endregion
    }
}

and

namespace CheckBuilder.Classes.Controls
{
    public partial class WrapLabel : Label
    {
        #region Private Declarations
        private Color foreColour;
        private int opacity = 256;
        private int fadeSteps = 10;
        private int fadeTime = 750;
        private bool fading = false;
        #endregion

        public WrapLabel()
        {
            base.AutoSize = false;
        }

        #region Public methods
        public async void FadeInOut()
        {
            fading = true;
            foreColour = ForeColor;
            int opStep = 256 / fadeSteps;
            if (Visible)
            {
                opacity = 256;
                opStep *= -1;
            }
            else
            {
                opacity = 0;
                Visible = true;
            }
            opacity = opacity + opStep;
            while (opacity > 0 && opacity < 256)
            {
                Invalidate();
                await Task.Delay(fadeTime / fadeSteps);
                opacity = opacity + opStep;
            }
            Visible = (opacity >= 255);
            fading = false;
        }
        #endregion

        #region Design attributes
        [Description("Fade time in milliseconds"), Category("Behavior")]
        public int FadeTime
        {
            get { return fadeTime; }
            set { fadeTime = value; }
        }

        [Description("Number of steps from transparent 0 to opaque 256"), Category("Behavior")]
        public int FadeSteps
        {
            get { return fadeSteps; }
            set { fadeSteps = value; }
        }
        #endregion

        #region Override control events
        protected override void OnPaint(PaintEventArgs pe)
        {
            if (fading)
            {
                using (var drawBrush = new SolidBrush(fadeColour(opacity, foreColour)))
                {
                    pe.Graphics.DrawString(Text, Font, drawBrush, ClientRectangle);
                }
            }
            else
            {
                base.OnPaint(pe);
            }
        }

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            this.FitToContents();
        }

        protected override void OnTextChanged(EventArgs e)
        {
            base.OnTextChanged(e);
            this.FitToContents();
        }

        protected virtual void FitToContents()
        {
            Size size = this.GetPreferredSize(new Size(this.Width, 0));
            this.Height = size.Height;
        }

        protected override void OnCreateControl()
        {
            base.OnCreateControl();
            this.AutoSize = false;
        }
        #endregion

        #region Stomp on AutoSize
        [DefaultValue(false), Browsable(false), EditorBrowsable(EditorBrowsableState.Never), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
        public override bool AutoSize
        {
            get { return base.AutoSize; }
            set { base.AutoSize = value; }
        }
        #endregion

        #region Private methods
        private Color fadeColour(int opacity, Color argbColour)
        {
            return Color.FromArgb(opacity, argbColour);
        }
        #endregion
    }
}

Tuesday, April 19, 2016

Structure C# like JavaScript

Been building a “proof of concept” Windows program that ended up including 9 different user controls, 4 Custom controls and some (gasp) printed output.

My custom controls included:
  • A Textbox with the equivalent of the HTML Textbox “placeholder”.
  • A custom Checkbox to emulate a material Design checkbox.
  • A Label control to automatically support word wrapped text – like HTML.
  • A Panel with rounded corners (which also required a couple of extensions to System.Drawing.Graphics to support drawing and filling the bounding round cornered “box”**).
There were the makings of a “rant” here, taking Microsoft to task on the lack of updates to the basic controls to meet modern requirements. I stifled it when I realised that, rather than providing solutions matching what would, after all, be ever shifting changes in design “fashion”, they had simply provided the tools necessary to remedy the situation. If you count yourself as a programmer then you have to accept the challenge to your design skills – mine are lacking, I admit.

What is the true difference between a Custom Control and a User Control?

This is a question I have seen asked in many places and so far all the answers I have seen have been “lies for children*” (simplifications that attempt to help someone make the right choice).

The key difference is that a User Control inherits from UserControl and a Custom control inherits from Control. That’s about it. You can add other controls to either and both will turn up in the visual designer “toolbox” in Visual Studio after a “build”. They can both place custom attributes and events into the control properties window as well as take advantage of properties inherited from their base types (these can also be suppressed if required).

Generally, you would choose to create a Custom Control if it is intended as a single control rather than a defined collection. (However there is no reason why a custom control might not act as a custom container for other Windows controls added, perhaps, at design time – like my panel with rounded corners). You would also select this type if you intend to manage drawing the control and also if you intend to subclass an existing control type.

User controls are a good choice if you are going to build a control using two or more pre-existing controls. This approach effectively provides a local name space and would normally provide code to handle individual sub-component events. User controls are better used when adding additional visual attributes during the paint event rather than managing the whole drawing requirement.

I think that my custom built control base type selection would normally be founded upon usage. “User Controls” being particular to a given application while “Custom Controls” might well be ported to other projects. Which probably does not help at all.

Why all those User Controls?

Well the program required repetitive collections of controls arranged as cards, subsections of cards and subsections of those subsections – all at the whim of the user. In addition, the program provides a preview pane to show how the output would be presented with support for interactive testing on the fly. At the design stage I could see that this had quite a lot of potential to get messy built using a conventional Windows Forms approach. I knew that if I built something similar in JavaScript then I would end up with a set of objects that managed their portion of the UI and dealt with the relevant events. Any required communication would be through callbacks to functions. The C# equivalent then became clear – a set of UserControls communicating between themselves using Delegates. In fact some of the presentation side of the program had already been built using JavaScript and it was interesting to note the strong similarities between some of the C# and JavaScript code blocks.

That was how it worked out – a varying number of custom UserControls communicating with each other in the main but with the owning form being called upon to manage “global” functions like database saves. I can only estimate the savings in code lines as “substantial”.

Printing controls

Providing a print output started out feeling a bit “retro” as the most obvious way to accomplish the task was to print what was effectively an image of the program output preview pane (or at least the contents). I could just about recall that the Visual Basic 3 manual(s)*** had included some functionality to print a window content but had never actually done any such thing in all my years of code.

Establishing the basic mechanism proved simple enough as any control has the capacity to draw itself to a bitmap and the resulting bitmaps can easily be drawn to the printed page (scaling as required).

Worth noting that when doing this it is easy to add a drop shadow by extending the size of the bitmap by a few pixels. You can then draw some lines in a sequence of grey shades to create the shadow effect.

private Bitmap drawControlsImage(Control control, bool addShadow = false)
{
    Bitmap bm = new Bitmap(control.Width, control.Height + ((addShadow) ? 3 : 0));
    control.DrawToBitmap(bm, new Rectangle(0, 0, control.Width, control.Height));
    if (addShadow)
    {
        Point pt = new Point(0, bm.Height - 3);
        using (Graphics g = Graphics.FromImage(bm))
        {
            using (var pen = new Pen(shadow[0]))
            {
                for (var sp = 0; sp < 3; sp++)
                {
                    pen.Color = shadow[sp];
                    g.DrawLine(pen, pt.X, pt.Y, pt.X + bm.Width - 2, pt.Y);
                    pt.Y++;
                }
            }
        }
    }
    return bm;
}

having previously coded

private Color[] shadow = new Color[3];

and

shadow[0] = Color.FromArgb(181, 181, 181);
shadow[1] = Color.FromArgb(195, 195, 195);
shadow[2] = Color.FromArgb(211, 211, 211);

My printer output took the form of one or more major components (User Controls) emulating a Material design “card”. Finding the best way to order the cards in columns to optimise the printed layout looked set to be an interesting problem. It was analogous to the 2D rectangle bin packing challenge but modified in that there was at least an implicit order in the cards plus the target rectangle could be proportionately increased in size (at least conceptually) by applying ScaleTransform() to the output graphics surface and thus (negative) zoom. While thinking of the best approach I also thought about the issue as a “flow layout” problem – with the option to resize the logical bounding rectangle until a fit was achieved.

After a happy hour reading around the potential algorithms I realised that my requirement was very much a special case – a simple stacking exercise.

Which at first sight produced an acceptable result – but a little thought turned up the most obvious fault. Early positioning of objects in a column could undermine later reductions in the overall width.

What was needed was a two pass process with the first pass constrained to not building a column “higher” than the measured page “vertical” space. Second pass could then work as before towards the smallest packed size. So far, testing shows that the two pass method produces a better result and (on most occasions) a nicely balanced layout.

It amused me to note that I was quite happy to write the first draft of my stacking algorithm based upon a two dimensional array of SizeF (SizeF[,]) but when I came to re-write it to include two passes I changed the structure to List<List<SizeF>> (effectively a sparse array with inbuilt equivalents of Push() and Pop()) and achieved some code reduction as a consequence.
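As a rough illustration of that first pass (my own sketch, not the code from the project – the method name and details are invented), stacking an ordered set of card sizes into columns without exceeding the measured page height might look like this:

```csharp
// Sketch only: first-pass stacking of ordered cards into columns,
// constrained so no column grows "higher" than the page.
private List<List<SizeF>> stackCards(IEnumerable<SizeF> cards, float pageHeight)
{
    var columns = new List<List<SizeF>>();
    var column = new List<SizeF>();
    float columnHeight = 0f;
    foreach (var card in cards) // cards retain their implicit order
    {
        if (column.Count > 0 && columnHeight + card.Height > pageHeight)
        {
            columns.Add(column); // this column is full - start another
            column = new List<SizeF>();
            columnHeight = 0f;
        }
        column.Add(card);
        columnHeight += card.Height;
    }
    if (column.Count > 0) { columns.Add(column); }
    return columns; // a second pass can then work towards the smallest packed width
}
```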


** Graphics extensions: You might like to take a look at this Code Project post by Arun Zaheeruddin which provides a fully overloaded set of rounded rectangle drawing functions that look enticing.

*** The VB3 manual set (included in the box) was probably the pinnacle of written language documentation. It was clearly intended to empower a generation of new Windows programmers – as indeed it probably did. We have come to accept that Google searches and Stack Overflow now provide detailed technical documentation for most things but this well written, detailed and accurate set in three (as I recall) volumes filled in the gap for a vast army until the World Wide Web was invented.

Monday, April 11, 2016

Dynamic C#

Just used the dynamic keyword for the first time in C# – to get the compiler to relax a bit and act more like VB, which allows late binding.

I was using a control Tag to store a couple of data items and, as Tag accepts an object, I decided to use an Anonymous Object rather than define another class or whatnot. Something like:

aControl.Tag = new { Sequence = ItemSequence, id = ItemID};

But how to consume those object attributes in another function?

The IDE and compiler are not going to take kindly to anything like aControl.Tag.Sequence as that can only make sense at runtime.

The dynamic keyword comes to the rescue.

Here I am looping through a control collection filtered by my specific control type and in the order of the Sequence attribute in the anonymous object stored in the control Tag:

foreach (MyCheckbox mb in this.Controls.OfType<MyCheckbox>().OrderBy(p => ((dynamic)p.Tag).Sequence))
{ … }

Also (after prompting by Visual Studio) used the following syntax to check if a reference to a delegate is null and execute it if it is valid:

notifyChange?.Invoke(ids);

You can add a conditional call to a delegate to a standard control event. For example:

myControl.SizeChanged += (object s, EventArgs e) => { checkSize?.Invoke(); };
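To put those fragments in context, the delegate fields themselves might be declared on the UserControl along these lines (the Action types and the wiring example are my assumptions – only the names notifyChange and checkSize come from the code above):

```csharp
// Sketch: delegate fields a UserControl could expose for its owner to assign
public Action<int[]> notifyChange; // called with the changed ids
public Action checkSize;           // called when layout should be re-checked

// the owning form wires them up after creating the control, e.g.
// myCard.notifyChange = ids => saveChanges(ids);
```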

The “is” keyword also came in handy with the ability to say something like:

if (myobj is CardSubsection){} 

and do something if myobj can be safely cast to the relevant class. “is” makes it way simpler (and produces clearer code) when passing one of a range of classes into a function via an object reference.
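A minimal sketch of that pattern (the method name and handling are invented for illustration; CardSubsection is one of the classes from this project):

```csharp
// Sketch: runtime-type dispatch on an object parameter using "is"
private void handleItem(object myobj)
{
    if (myobj is CardSubsection)
    {
        var subsection = (CardSubsection)myobj; // cast is safe after the "is" check
        // ... subsection-specific work here
    }
    // further "is" checks for the other classes in the range
}
```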

On the subject of clarity, I came across a little bit of code like this in a blog:

if (pa.TicksToGo-- > 0) {}

which is shorter than the equivalent

if (pa.TicksToGo > 0) {}
pa.TicksToGo--; // or maybe pa.TicksToGo -= 1;

(note the test comes first: the post-decrement tests the value before decrementing it)

but less easy to maintain at some future date. I know that JSLint would like to persuade you that -- and ++ are evil in and of themselves but I am not convinced – safe to use with due caution I think.

Looks like C# is going to “keep on giving” with the news that the dev teams for C# and VB are going to let the two languages drift further apart. Seems that developers using VB favour stability (presumably finding multiple language versions bothersome within their organisations) and the C# types are gluttons for novelty. Should be interesting.

Read more here: http://www.infoworld.com/article/3051066/application-development/microsoft-c-visual-basic-are-now-set-to-diverge.html

Sunday, March 20, 2016

Material Design Lite

A short and incomplete review

Material Design Lite (MDL) is a Google project intended to deliver a “Material design” look and feel to websites. This HTML, CSS and JavaScript project is a work in progress so shortcomings experienced might well be short lived so please bear in mind that this limited review was based upon version 1.1.2.

The project FAQ makes the following point. “We optimise for websites heavy on content, such as marketing pages, articles, blogs and general web content that isn’t particularly app-y.” However the supplied layout components respond well to variations in screen size and thus provide a solid basis for web pages targeting smartphones alongside desktop browsers. Where things are a bit clunky at the moment is in the area of dynamic page changes after the initial load and it may be that this will prove to be an enduring issue as (after all) that could be considered somewhat app-y.

The layout components provide a good range of page formats that can include navigation, a drawer, tabs, a straightforward and responsive grid and a footer. There is inbuilt JavaScript support for most of the layout options – you just set out your mark-up as directed and you are good to go. Colour choice is deliberately restrained but you can select from a range of themes to build a ‘custom’ CSS file.

Other components include a good variety of buttons, excellent cards, lists, menus, sliders, toggles and text fields. There is a dialog component but this is limited by only partial browser support although there is a polyfill available. The only obviously lacking component is a <select> but I found a CSS only solution that filled the gap nicely.

There is full support for the Google Material Design Icons although for MDL I only used the icon font having previously used the SVG and CSS sprite versions on other projects. As expected the font more than met my requirements.

I have a fondness for the consistency and flexibility of jQuery when it comes to DOM manipulation (however untrendy) and can confirm that jQuery works happily with MDL as long as you remember that jQuery objects are collections of elements: when calling an MDL function you may need to pass the first (or selected) underlying DOM element – something like componentHandler.upgradeElement($(selector)[0]) – rather than the jQuery object itself.

Which brings me nicely to the app-y bits.

I first drafted out a reasonably complex static page to get a feel for the components and how they are laid out. The results were pleasing with just enough animation to give the design some life. Next I set up a skeleton MDL layout and attempted to build the same page content from JSON using JavaScript in the browser. Once the DOM has been updated with the new elements then you must call the componentHandler.upgradeElement() utility to apply the MDL layout magic to the new DOM elements. This worked very well for the set of cards I inserted together with their <div>s, text and checkboxes.

I then started to build the layout for a “visual” form constructor and started to run into some limitations. I envisaged my form constructor being based upon multiple tabs with the user being able to add a new tab to start a new ‘form’. I was able to inject the same structures into the DOM as would be required to initialise an MDL tabbed page but calling the upgradeElement() utility only gave me a partial result lacking in any MDL script support.

Currently most of the MDL documentation consists of asking questions on Stack Overflow.

There I learned about the downgradeElements() utility (although it does get a mention on the MDL web pages) that should be applied to a container that has already been sprinkled with the MDL fairy dust prior to calling the upgradeElement() function to rebuild things with any new inclusions. Indeed user HR_117 has kindly supplied a CodePen demo of this being applied to a set of MDL tabs. Trouble is, I found this failed when the tab container was itself wrapped within an MDL header (reported as an improbable bug so the true cause was masked in some way). Could very well be that I missed a trick here but how many hours can you spend on just one issue? My workaround (horror) was to create a bunch of tabs up front and to hide them on page load until they were required (a remarkably low overhead in reality as I was able to recycle previously used and subsequently closed tabs).

I hit similar problems with the clever “Text with Floating Label” component when injected into a pre-existing layout. After a short losing battle (quite possibly my misunderstanding or whatever) I switched to using this http://clubdesign.github.io/floatlabels.js/ alternative jQuery plugin that was a very nice substitute well within the style and spirit of Material Design.

In the end I got a functional page although many interactive elements of the layout were untidy. I knew I was in for a long round of CSS tweaking to get things polished but then again I have hit similar walls when working with Bootstrap. It was one of the reasons I had my first dalliance with Polymer where, by default, you can avoid the intrusion of inherited CSS – and the overhead of tracking down and fixing localised layout glitches. So maybe I should have a bash at mixing Polymer components into an MDL ‘frame’ – now that sounds like a winner and indeed should be very doable. A definite if I get to re-visit this specific task.

I am convinced enough by MDL as it stands to be sure of using it for aspects of a commercial project currently in development. If you can avoid bumping into the limitations (and they will fast retreat I am sure – or maybe future documentation will show me, at least, the error of my ways) then it is very effective and can support a responsive web page that is bang up to date in design terms.

The experiment with MDL was a nice opportunity to exercise my JavaScript muscles after a bit of a lengthy break where C#, Java and even a touch of VB.NET had been the required tools. JavaScript is always a delight to get back to, as I find it a very satisfying language to work with. You can take liberties that other languages could not support but in the end I always feel a certain obligation to refactor code until it is fit for public display – even though I then feed it through Closure * to optimise browser download and execution. I found the (reasonably) recent addition of the Array.forEach() functionality (Mozilla polyfill available) very effective in reducing code complexity and enhancing readability – looks like ECMA 5 is creeping into my code. Array.find() is also another great – er – find.


* I do hope that Closure is extended to be a compiler for WebAssembly when that gets support across the browser range (looking good so far).

Addendum:

In many ways it makes more sense to implement the (rather restricted) forms designer app-y page that highlighted the MDL issues above as an app. Mobile development platforms are obviously going to be happy with the design concepts but how about good old Windows?

There is a Material Design skin project on GitHub addressing Windows forms projects although it has been a while since there were any updates. There is also a WPF focused project that looks a lot more complete and is certainly active - maybe attention has drifted from the former to the latter.

A little experiment showed that some Material Design styling can be added to a pretty basic Windows Forms app by overriding the OnPaint() methods of some of the key components. Turning a panel into a card, for instance, does not require a lot of effort.
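By way of illustration – my own sketch rather than the experiment's code – a panel can be nudged towards a card look with a white background and a simple painted shadow:

```csharp
// Sketch: a Panel loosely styled as a Material Design card
class CardPanel : Panel
{
    public CardPanel()
    {
        BackColor = Color.White;
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        using (var shadowPen = new Pen(Color.FromArgb(195, 195, 195)))
        {
            // single-pixel "shadow" along the bottom and right edges
            e.Graphics.DrawLine(shadowPen, 0, Height - 1, Width - 1, Height - 1);
            e.Graphics.DrawLine(shadowPen, Width - 1, 0, Width - 1, Height - 1);
        }
    }
}
```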

So, for Windows, a bit of mix and match should do the trick...

Wednesday, January 06, 2016

Kickstart a Graphics Book

Why you should fund this graphics Book




You are primarily a .NET developer and/or a regular user of C#, VB or even F# but do not encounter graphics as a regular requirement within your projects. You are now wondering why I am promoting a book about .NET graphics to you in particular. There are two reasons.


The first is just that when you do need to include some graphical techniques into some development task then you will need a book that can jumpstart your skills acquisition and/or provide just the code you need for an unexpected task.


The second is that keeping up with developments in User Interface (UI) techniques is going to make increasing demands upon graphical skills. The UI will advance faster than the Windows API (or its surrogate APIs like WPF) and users, product managers and designers are going to be making some tough demands upon all developers with code running in the “human domain” (shall we call it?).


Now you could prepare for this upcoming eventuality by brushing up on the maths involved - maybe starting here http://codeofthedamned.com/index.php/introduction-to-the-math-of - but it would be way easier to let someone else tease out the crucial bits and present the results alongside their immediate sample application.


Why this specific book? The answer to that is very straightforward. The last time Rod wrote a book about graphics it was brilliant – and I have some authority here. When I first started Windows programming I chose a development area that was entirely graphical (an early mapping application). This was before the Internet and the vast resources it has since exposed – back then, if you wanted to know how something was done you had to sit and work it out for yourself. I had an early copy of Charles Petzold’s master work (Programming Windows) to look up the (Windows 3.1) API calls and, in extremis, access to Microsoft developer support via CompuServe [https://en.m.wikipedia.org/wiki/CompuServe] (yup, you could email the guys that actually wrote the stuff in those dim and distant days, and if your enquiry was interesting enough they would reply). Rod had not yet written his book and I struggled to learn and to develop the techniques required to produce even moderately efficient graphics code.


Some time later Rod’s book was published and I grabbed a copy - partly as personal affirmation but also to learn just how often I had done it all wrong. Here was a definitive work you could dip into as required, and I found that thereafter I did, saving time in recalling (or discovering) the optimal path every time I needed to. As .NET took over as the best development framework I found I was using the book rather less. I knew that Rod had an updated opus in mind, but sadly he could not convince a publisher that a market of the scale they required was waiting for a new book.


Here is our opportunity to fund a book of undoubted value for all - a definite plus on anyone's electronic bookshelf. Please join me in funding this Kickstarter. It will benefit the community of .NET developers as well as each individual “investor”.

Start here https://www.kickstarter.com/projects/2002981747/net-graphics-programming-omnibus/description and then please encourage others to join in and make sure this book becomes a reality.

Addendum:

Sadly this Kickstarter has been withdrawn.

Monday, December 07, 2015

Bare Minimum Software

With the financial year end fast approaching it is time to consider any kit purchases in anticipation of activities during 2016 (and I don’t get that list right very often, in truth). A laptop upgrade to something smaller and lighter looks a good idea – particularly with one of the teenagers having a dead machine and thus more than grateful for my cast-off (actually not a bad spec - certainly going to outperform most current low-end offerings).

Buying hardware is just a few clicks in a web browser these days but the software component list promises a long session just running installs.

What would be on your list for a basic set-up?

I started with:


Plus some homemade tools and utilities.

Then there is the question of MS Office – will Gmail’s ability to display (say) Word documents cover enough of the incoming attachments to get by?

Something to read PDFs, I suppose.

Any suggested additions?

Factor in a few system updates to add to the confusion, plus slow package download speeds, and this already looks like a day’s work.

Edit:

Cracked and added a copy of MS Office. With the release of Office 2016 there are a lot of bargains around - particularly on earlier versions. In fact, thanks to a series of stock shortages, I ended up with a copy of 2016 Pro for the price of a 2013 Home & Office on the residual market.

Tuesday, December 01, 2015

Raspberry Pi music

Somewhat off topic.

I am the very happy owner of a five-year-old Brennan JB7 digital music system, which includes a couple of their speakers sitting on some carefully positioned stands. Recently Brennan sent me an email to introduce the new “B2” model – and very nice it looks, but I wondered if the price of £579.00 for the top model was strictly warranted (with all due respect for the undoubted product quality).

As the new device seemed to be based upon a Raspberry Pi I ran through a few numbers for building something equivalent:

  • 1TB USB drive – retails at around £40
  • Raspberry Pi 2 – around £30 (often less)
  • Micro SD card – £5
  • Amplifier – £50
  • Power supplies – £20
  • CD drive – £8
  • Sundries (cables, case etc.) – £25

The Brennan box also has digital S/PDIF and Line Out plus input options (all of which add a little to their costs), but I can’t see me making use of those in these minimalist times. I would just like to add digital radio and streaming to a disk-based music playback capability.
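Out of curiosity, the parts list above can be totted up in a few lines (the prices are the rough retail figures quoted, so treat the total as a ballpark rather than a firm quote):

```python
# Rough DIY parts list from above (GBP; approximate retail prices)
parts = {
    "1TB USB drive": 40,
    "Raspberry Pi 2": 30,
    "Micro SD card": 5,
    "Amplifier": 50,
    "Power supplies": 20,
    "CD drive": 8,
    "Sundries (cables, case etc.)": 25,
}

total = sum(parts.values())
print(f"DIY total: £{total}")  # £178, against £579 for the top B2 model
```

Even allowing for a generous contingency, that leaves a healthy margin against the B2’s asking price.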

In going through my shopping list I started to get enthusiastic and wondered just how good a device I could put together. Now I am a software man – so what am I doing with what looks like a hardware project? Well, not much in the way of electronics – I just need to hook the main components together – so perhaps just a little soldering. The nitty-gritty all looks to be in the software.

The first task was effectively a side project. I felt that testing with speakers would be clumsy within the relatively confined area of my personal workspace – but I could manage headphones. So I decided to build a (near) empty box to hook some speaker terminals up to a headphone jack socket. With this I could give an amplifier something to drive and use the headphones to check the output without disturbing the peace.


The hard drive is one recovered from a retired laptop, sitting in a box that adapts the SATA interface and makes it a USB drive – it cost less than £3 on Amazon if I recall. The Pi can’t supply enough power for the drive through the USB port, so there is an additional power supply that takes the 12 volts arriving at my “box”, converts it to the 5 volts required, and delivers it through a USB connection and a standard “Y” USB cable.

The main power supply is a butchered mains/car adapter, but an old laptop power supply would probably do – depending upon the voltage limits of the amplifier used. The HiFiBerry Amp+ used here can happily operate between 12 and 18 volts and in turn powers the Raspberry Pi, which helps reduce the complexity of the wiring rat’s nest you can see below. The idea was to start with longish hook-ups and then shorten them as required when the components get fixed into the case.


The case already has a power lead entry and an on/off rocker switch drilled and fitted. I have also drilled the back plate for the speaker connections and super-glued the main part of the loudspeaker terminal block to the outer face. 

I will have to decide if I am going to go with Wi-Fi or add an RJ45 fitting to the back plate for a wired Internet connection. The box will certainly need a ‘power on’ LED indicator, and I would like to add an infra-red remote control for volume and track skipping at least (although most control will probably be via a web browser interface). Given that the Pi can join the network and expose the music storage location (so I can add albums), the inclusion of a CD drive is still a choice to be made (one that might tax my ability to neatly tackle the case modifications).

I first started the Raspberry Pi with a copy of the standard Raspbian Linux variant. This booted just fine but (of course) was not aware of the amplifier – so a rather quiet initial test. I then downloaded and installed the distro supplied by HiFiBerry and tried that. Now I quickly ran into the limits of my Linux knowledge and had some difficulty in getting sound to the amplifier. In the end, I pointed the web browser at Amazon, found the latest Enya album page and clicked on the track samples. That initially got some loud clicks from the headphones, but after adjusting the (X windows) volume control I heard some very nicely reproduced music. That at least confirmed that the amplifier worked and that my speaker/headphone adapter also functioned as expected.
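For the record, the usual recipe for pointing a stock Raspbian at a HiFiBerry board (as I later understood it – the exact overlay name and card index depend on the board and kernel version, so treat these as assumptions and check HiFiBerry’s own documentation) is a device-tree overlay plus a default ALSA device:

```
# /boot/config.txt - enable the Amp+ overlay, disable on-board audio
dtoverlay=hifiberry-amp
# (remove or comment out any "dtparam=audio=on" line)

# /etc/asound.conf - make the HiFiBerry card the ALSA default
pcm.!default { type hw card 0 }
ctl.!default { type hw card 0 }
```

After a reboot, `aplay -l` should list the HiFiBerry card; if it appears as card 1 rather than card 0, adjust the asound.conf entries to match.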

It was now time to widen the testing and this meant entering (at least initially) the sometimes murky world of “open source” software. I might have the odd moan in this section but I would not want to imply any criticism of any of the projects mentioned or the teams that labour on them. Without open source and free projects we would all be the poorer (financially and culturally). In the past I have published open source software myself and that’s all I am saying (I have shared the pain guys).

A few minutes’ Googling showed that the Pi has inherited a solid core of key music-related components from Linux and that there were a number of live projects looking to deliver the ‘ultimate’ solution. Two projects (RuneAudio and Volumio) seemed to be forks of the much respected RaspyFi project, which itself now looks defunct. These projects currently sport PHP web interfaces that in turn make use of the MPD (Music Player Daemon) music-playing server. A few posts suggest that the RuneAudio/Volumio divide was a little bitter.
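As an aside, MPD itself speaks a very simple line-based TCP protocol (on port 6600 by default), and driving that protocol is essentially all these web front ends do. A minimal sketch in Python using only the standard library – the quoting helper follows the documented protocol rules, but treat the whole thing as illustrative rather than a battle-tested client:

```python
import socket

def mpd_command(name, *args):
    """Format one MPD protocol command line. Arguments containing
    spaces or quotes must be double-quoted, with backslash escaping."""
    out = [name]
    for arg in args:
        arg = str(arg)
        if arg == "" or any(ch in arg for ch in ' \t"\\'):
            arg = '"' + arg.replace("\\", "\\\\").replace('"', '\\"') + '"'
        out.append(arg)
    return " ".join(out) + "\n"

def mpd_send(line, host="localhost", port=6600):
    """Send one pre-formatted command line to an MPD server and return
    the reply lines up to the closing OK/ACK. Needs a running MPD, of course."""
    with socket.create_connection((host, port), timeout=5) as sock:
        f = sock.makefile("rw", encoding="utf-8", newline="\n")
        f.readline()              # greeting, e.g. "OK MPD 0.19.0"
        f.write(line)
        f.flush()
        reply = []
        while True:
            resp = f.readline().rstrip("\n")
            reply.append(resp)
            if resp == "OK" or resp.startswith("ACK"):
                return reply
```

With an MPD instance listening, something like `mpd_send(mpd_command("add", "albums/track 01.flac"))` followed by `mpd_send(mpd_command("play"))` would queue and start a track – which rather takes the mystery out of what the fancier front ends are doing.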

I tried the RuneAudio project first. This was partly because the main internet page showed a picture of the HiFiBerry amp I had selected, with the implicit promise that this and other selected Raspberry Pi (HiFi) add-ons were supported. I was also slightly put off by the way the Volumio project announcement was ‘camped out’ on top of the RaspyFi site. First I had to download the RuneAudio distribution (based, I think, on Arch Linux) and pop it on the micro SD card. It booted and I quickly located the web interface from the browser on my PC. It was clear that RuneAudio had located the albums stored on the hard drive, so I had a stab at selecting one to be played. Not a lot could be heard. A Google search found a commitment to support my amplifier from early 2014 and some suggestions on how to manage the trick until that support was eventually delivered. This involved installing some software and changing some settings. I logged into the distro and started trying to follow the instructions. The first irritation was that my keyboard layout was not supported (symbols all over the place and, inexplicably, the <y> and <z> keys swapped) but I struggled on. Then the installations failed with 404 errors, and so I decided to try my luck elsewhere.

Next up was Pi MusicBox, which has a web site like something from the early ’90s – but it’s the software that counts. Unfortunately (for me at least) the software sort of booted and then hung. To be fair, this might have been because my Pi was the recent Pi 2. So I tried again.

I tried Volumio after all. Another distro (based on Debian this time, I think) to install on the old SD card. This distro booted and the web app quickly became available from my PC’s web browser. The web page presented was nearly identical to the RuneAudio one, which probably means both were inherited in turn. A quick trip to the settings page located the HiFiBerry amp and, well - it just worked, playing albums and tracks from the hard drive. I turned to the web radio options and soon had the BBC World Service coming through loud and clear (sitting there with my headphones it could have been a clip from an old 1960s spy film – although there was no short-wave crackle and I was not wearing a hat).

So with a working sound source it was time to stop and take stock (actually I should try Mopidy soon as that is Python based and thus potentially susceptible to some constructive hacks).

On the face of it, Volumio might look like a good starting point for further development. I am no fan of PHP, but how bad could it be? However, the Volumio project has now started a complete re-write using Node.js. That is probably the best current starting point for a new project of this nature, but it does imply that future releases might be “feature incomplete” and that not much attention is going to be paid to moving the current release forward (this is not a criticism – just a comment).

Everything tried thus far has been based upon a custom distro, which presumably greatly simplifies the task of distributing software updates (as well as integrating any developments with core OS changes). It does make the process of adding custom extras (like a CD “ripper” and an IR interface) a teensy bit problematic. I am assuming that the additional power of the Pi 2 will compensate for any extra “drag” from the software mods and additions I make – but we will have to see.

I will next dig out a pair of old Mission monitor speakers I have in the garage and give them a try – cranking up the output to see just what this little rig can deliver.

One sad note – I had hoped that this little project could deliver BBC radio output (2, 4 and 5 anyway) and thus preclude the medium-term purchase of a DAB radio for my office space. It turns out that earlier in the year the Beeb retreated into their iPlayer and websites so they could control (read: stop) who got to hear the output, based upon a proxy for location (the IP address). I am not at all sure how to view this – particularly as I thought the BBC had a mission to communicate with the world and not just those who pay the UK licence fee. The continued availability of the World Service channel just underlines this parochialism.

Speaker Update:

The amp drove the speakers very well indeed. Loads of power – going to be as loud as anyone would want, even in a large room. I have not set up a more discerning listening test yet, but the "sound stage" was clearly evident and I could not detect any obvious amplifier-induced distortion.