Friday 15 November 2013

Practice makes…?

One of the issues with working from a home office, as I do a great deal of the time, is “education” or “training”. I need to keep up to date. I need to learn about new things. The problem is that very few opportunities come and knock on my door. Of course, the internet is a wonderful thing, but it is like a good public library. If you like reading, you can get lost or even lose yourself in there. That’s where personal recommendation comes in.

Quite recently I found an education site called Udemy. Maybe you knew about it already; I didn’t. I decided to take a couple of courses to find out if I liked the experience: I did, and I do.

One of the courses I took was called “Performance of Speaking” by a man called Tom j Dolan (he writes it like that, so I will as well). I confess that one of the reasons I took the course was that I was curious about how effective a course in such a subject could be as distance learning. All I can say is “it worked for me”.

I consider myself a reasonable public speaker. I am comfortable addressing a room containing tens or maybe even a hundred people. I haven’t tried addressing a stadium full yet but maybe that will come. Nevertheless, I felt there was room for improvement.

Tom’s credentials are excellent and his approach is quite simple: public speaking is a practical skill. It is something which can be learned. He makes an important point: many of us are too critical of ourselves. We demand “perfection” (whatever that is), which is really an unreasonable demand to make of ourselves. Instead we should aim for improvement: “Kaizen”, as the Japanese would have it. Continuous improvement is a better goal than perfection. We can usually improve. The best musicians practice constantly.

There are many skills like this. We can learn the facts, we can answer questions and give the “right” answers, but to become really good at them, we have to practice. We (or at the very least, I) are creatures of habit. When we first learn a new behaviour it takes a great deal of effort. As we practice we get better at the execution of the behaviour, but not only that, we also find that we have more capacity to think about how and why we are doing it. Experience is a valuable thing.


As part of something I am doing at the moment, I need to record and then edit my own voice. Tom’s course has helped me get used to the awkwardness I felt. It hasn’t changed the content at all but it has improved the delivery and also how I feel about the delivery. 

Wednesday 6 November 2013

My first job (as a bottle-washer)

Just recently the great and good have been telling people on LinkedIn about their first day at work. I don't want to feel left out, so I thought I'd share what I remember about my first day at work. Actually, I've had several starts, all in different locations and under different circumstances. If you like, this is the "first, first"!

The job was supposed to be a fill-in while I retook an exam to improve on the grades which I needed to get into the university course I wanted. I remember that I had been given no notice of the interview - quite literally I had been asked "Can you go NOW?" and I'd gone - THEN! I had been unkempt and unshaven. The interview was on a Wednesday or Thursday and I must have said the right things, because they asked me if I could start on the following Monday.

The job was as a lab assistant in a small research laboratory. My responsibilities were to be quite varied, basically: do as you're told by your superiors (which meant almost everyone else!). I thought of it as being "bottle-washer".

On the Monday, the post arrived as I was about to set off for work. It contained an unconditional offer for the university course I wanted. They said they thought I could cope with the grades I had. So, I turned up on my first day at work and handed in my notice!

In fact, I told my new employer that if they preferred, I would "not start at all" and we could call the whole thing quits. They were really decent about it and said that I could have the job until I was due to go to university. Excellent I thought: relevant experience and two months of pay. Just the start I needed.

My job really was "washing bottles", and test-tubes and beakers and flasks and all the other paraphernalia of a chemical laboratory. I had to learn pretty quickly that we had some real nasties. I spent at least some of my time working with chromic acid, which is really not good to come into contact with. One of the things we worked with was ion exchange resin, which came as tiny polystyrene beads. I had to be really careful not to spill any of the wet beads on the floor because when they dried they became like little ball-bearings, and on a hard lino floor the effect could be really quite dangerous.

Some pleasant memories are:

  • Being told off because "I walked like the lab manager" and the sound of my footsteps made some of my colleagues uneasy (to this day, I don't know what they were up to). 
  • Playing cricket in the park opposite the lab during lunch break. The wickets were old retort stands and the bat was kept in one of the equipment drawers.
  • And playing cards with the other workers on the wet lunch breaks.


Less pleasant memories are:

  • Doing seemingly endless titrations to get the "break-through point" on a sample ion exchange resin,
  • And the smell of the Amination Room where we kept the fume cupboards and unpleasant materials.

It was a good start!



Wednesday 30 October 2013

Have you considered using the cloud?

…I know I have. If you read the blurb being written by all and sundry (and now including me) you would be forgiven for imagining that the entire world either lives with its head in the cloud, or is considering doing so in the near future.

As I allow myself a limited budget of both time and money for “education” I’m careful what I spend it on.  Last Thursday (24th October 2013) I went to the “Cloud Success Roadshow” in Limerick. It was well worth my investment in time.

The roadshow is run by a company called Let’s Operate (http://www.letsoperate.com/) and although they show you their products, they definitely didn’t go in for the hard sell.

The roadshow reminded me of the factors I need to consider about my “IT strategy” as a whole. In a lot of cases the issue is not finding out what the characteristics of the “Cloud Option” will be, but finding out the matching characteristics of the alternative. For example:
  • “The Cloud will cost x” (per user/month), but how much does my server actually cost me?
  • For that matter, how much is all that data worth (to me)?

Thinking properly about the Cloud will almost certainly make you think very hard about the speed and quality of your broadband connection and your dependence on it. If, like me, you live out in the country and work from home some of the time then this is important. Never mind the quality of the connection: a year ago I lost my broadband altogether when some “eejit” demolished a telegraph pole just down the road! I was sent scurrying down the road to borrow a connection from an acquaintance in order to send a vital eMail reply. I’ve done something about that.
  

There are still a couple of stops planned on the roadshow. If you live in Dublin or Belfast I suggest you consider spending an afternoon there if you have the time.

Monday 21 October 2013

Will losing constraints set you free?

I’ve been busy with a project, so I’ve finally got round to writing this a week later than I intended…

In a recent conversation, someone pointed out that people sometimes remove “constraints” from a database in order to improve performance. This made me ask myself:

Is this a good thing, or a bad thing?

I have to admit that this is a technical change that I have considered in the past. Nevertheless, I have mixed feelings about it.

After some thought, my opinion is:
  • For many situations a constraint is redundant. The fundamental structure of many applications means they are unlikely to create orphan rows.
  • The cost of the constraint is in the extra processing it causes during update operations. This cost is incurred every time a value in the constrained column is updated.
  • The benefit of a constraint is that it absolutely protects the constrained column from rogue values. This may be particularly relevant if the system has components (such as load utilities or interfaces with other systems) which by-pass the normal business transactions.
  • Other benefits of constraints are that they unequivocally state the “intention” of a relationship between tables and they allow diagramming tools which navigate the relationships to “do their thing”. Constraints provide good documentation, which is securely integrated with the database itself.

In short:
  • The costs of constraints are small, but constant and in the immediate term.
  • The benefits of constraints are avoiding a potentially large cost, but all in the future.

It’s the old “insurance” argument. Make the decision honestly based on a proper assessment of the real risk and your attitude to taking risks. Be lucky!

More Detailed Argument

For those who don’t just want to take my word for it, here is a more detailed argument.
Let’s take the “business data model” of a pretty normal “selling” application.

When we perform the activities “Take Order” (maybe that should be “Take ORDER”) or “Update Order”:
  • we create or update the ORDER and ORDER_LINE entities, and
  • in addition we refer to PRODUCT (to get availability and Price) and presumably to the CUSTOMER entity which isn’t shown on the diagram.

When I translate this into a Logical data model, I impose an additional rule “Every ORDER must contain at least 1 ORDER_LINE”. The original business model doesn’t impose this restriction.

Remember, some people do allow ORDERs with no ORDER_LINEs. They usually do it as part of a “reservation” or “priority” process, which we are not going to attempt here.

When the transaction which creates the ORDER and ORDER_LINE makes its updates, it will have read CUSTOMER and ORDER, so it is unlikely to produce orphan records, with or without constraints.
On the other hand, by having the constraints we can document the relationships in the database, so that a diagramming tool can produce the ERD (really, I suppose, that should be a “Table Relationship Diagram”).

I am left wondering whether it would be possible or desirable to enforce my “Every ORDER must contain at least 1 ORDER_LINE” rule. I’ll think about that further. (Note to self: Can this be represented as a constraint which does not impose unnecessary and unintended restrictions on creating an ORDER?)
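
In the meantime, checking for violations after the fact is easy enough. Here is a minimal sketch in T-SQL; the table and column names (ORDER, ORDER_LINE, ORDER_ID) are the ones from my example, and ORDER has to go in square brackets because it is a reserved word:

-- Find ORDERs which have no ORDER_LINEs at all.
select o.ORDER_ID
from [ORDER] as o
where not exists
      (select 1
       from ORDER_LINE as ol
       where ol.ORDER_ID = o.ORDER_ID);

A query like this could be run periodically as a way of policing the rule without a declarative constraint.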

If we don’t have constraints, and we have something other than our transaction which is allowed to create ORDERs and/or ORDER_LINEs (as I said, typically this would be an interface with another system or some kind of bulk load), we have no way of knowing how reliably it does its checking, and we might be allowing things we really do not want into our system. Constraints would reject faulty records, and the errors they created (or “threw”) could be trapped by the interface.
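
To make that concrete, here is a minimal T-SQL sketch of the constraint I have been discussing. The column details are my own invention, kept to a bare minimum:

create table [ORDER] (
    ORDER_ID    int not null primary key,
    CUSTOMER_ID int not null
);

create table ORDER_LINE (
    ORDER_ID   int not null,
    LINE_NO    int not null,
    PRODUCT_ID int not null,
    QUANTITY   int not null,
    primary key (ORDER_ID, LINE_NO),
    -- The constraint under discussion: every ORDER_LINE must belong
    -- to an ORDER which actually exists.
    constraint FK_ORDER_LINE_ORDER
        foreign key (ORDER_ID) references [ORDER] (ORDER_ID)
);

-- A rogue bulk load which tries to create an orphan line now fails:
-- this insert is rejected because there is no ORDER with ID 999.
insert into ORDER_LINE (ORDER_ID, LINE_NO, PRODUCT_ID, QUANTITY)
values (999, 1, 42, 6);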

    

Monday 30 September 2013

What if you don’t have a Data Model?

My previous post got me thinking. One thing leads to another as they say. The whole of my process really requires that you have a Data Model (aka Entity Relationship Diagram/Model and several other names). But what do you do if you don’t have one, and can’t easily create one?

What’s the problem?

Suppose you have a database which has been defined without foreign key constraints. The data modelling tools use these constraints to identify the relationships they need to draw. The result is that any modelling tool is likely to produce a data model which is just a collection of disconnected boxes: tables with no relationships drawn between them. This is not very useful!

Faced with this, some people will despair or run away! This is not necessary. Unless the database has been constructed in a deliberately obscure and perverse way (much rarer than some developers would have you believe) then it is usually possible to make sense of what is there. Remember, you have the database itself, and that is very well documented! Steve McConnell would point out that the code is the one thing that you always have (and in the case of a database, it always matches what is actually there!).

To do what I propose you will need to use the “system tables” which document the design of the database. You will need to know (or learn) how to write some queries, or find an assistant who understands SQL and relational databases. I've used MS SQL Server in my examples, but the actual names vary between database managers: in IBM DB/2, for example, I seem to remember the equivalent is Sysibm.systables. You will have to use the names appropriate for you.

The method

“Method” makes this sound more scientific than it is, but it still works!
  1. Preparation: Collect any documentation
  2. “Brainstorm:” Try to guess the tables/entities you will find in the database. 
  3. List the actual Tables
  4. Group the Tables: based on name
  5. For each “chunk”, do the following:
    1. Identify “keys” for tables: Look for Unique indexes.
    2. Identify candidate relationships: Based on attribute names, and non-unique indexes.
    3. Draw your relationships.
    4. “Push out” any tables that don’t fit.
    5. Move on to the next group.
  6. When you’ve done all the groups, look for relationships from a table in one group to a table in another.
  7. Now try and bring in tables that were “pushed out”, or are in the “Miscellaneous” bucket.
  8. Repeat until you have accounted for all the tables.

At this point you are probably ready to apply the techniques I described in my previous post (if you haven’t been using them already). You might also consider entering what you have produced into your favourite database modelling tool.

The method stages in more detail.

Preparation: 

Collect whatever documentation you have for the system as a whole: Use Cases, Menu structures, anything! The important thing is not detail, but to get an overview of what the system is supposed to do.

“Brainstorm:” 

Based on the material above, try to guess the tables/entities you will find in the database. Concentrate on the “Things” and “Transactions” categories described in my previous post. 

Don’t spend ages doing this. Just long enough so you have an expectation of what you are looking for.

Remember that people may use different names for the same thing e.g. ORDER may be PURCHASE_ORDER, SALES_ORDER or SALE.

List the tables:

-- List every table, with enough metadata to spot recent changes.
select name, object_id, type, type_desc, create_date, modify_date
from sys.tables;

(try sys.views as well)
       

Group the tables


Group the tables based on name: ORDER, ORDER_ITEM, SALES_ORDER and PURCHASE_ORDER would all go together.

Break the whole database into a number of “chunks”. Aim for each chunk to have, say, 10 members, but do what seems natural rather than forcing a particular number. Expect to have a number of tables left over at the end. Put them in a “Miscellaneous” bucket.
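
If the database uses underscores in its names, a crude query can give you a first cut at the groups. The prefix rule here is my own invention, only a rough heuristic, so treat the output as a starting point, nothing more:

-- Treat everything before the first underscore as a candidate
-- "subject area" prefix, and count the tables under each prefix.
select case when charindex('_', name) > 0
            then left(name, charindex('_', name) - 1)
            else name
       end as name_prefix,
       count(*) as table_count
from sys.tables
group by case when charindex('_', name) > 0
              then left(name, charindex('_', name) - 1)
              else name
         end
order by table_count desc;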

Identify the candidate keys and foreign keys from the indexes

-- Unique indexes are good candidates for primary (or alternate) keys.
select tab.name, idx.index_id, idx.name, idx.is_unique
from sys.indexes as idx
join sys.tables as tab on tab.object_id = idx.object_id
where tab.name like '%site%';

-- List the columns of a table (or group of tables) and scan them
-- for likely key and foreign key columns.
select tab.name, col.column_id, col.name
from sys.columns as col
join sys.tables as tab on col.object_id = tab.object_id
where tab.name like '%PhoneBook%'
order by 1, 2;
 

From attribute names and indexes, identify relationships



  • Sometimes the index names are a give-away (FK_...)
  • Sometimes you have to look for similar column names (ORDER_LINE.ORDER_ID → ORDER.ID); a query sketch follows this list
  • Multi-part indexes sometimes indicate hierarchies.
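
Here is a sketch of the sort of query I mean, assuming the common convention that a foreign key column is called <parent table>_ID. Treat the results as candidates to be verified, not as the truth:

-- A column called <something>_ID in one table may well point at a
-- table called <something>. List the matches for inspection.
select child.name  as child_table,
       col.name    as child_column,
       parent.name as candidate_parent_table
from sys.columns as col
join sys.tables as child on child.object_id = col.object_id
join sys.tables as parent on col.name = parent.name + '_ID'
where child.name <> parent.name
order by parent.name, child.name;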

Push Out

If a table doesn't seem to belong with the group you are working on, don't agonise over it: "push it out" of the group and come back to it later, either in another group or in the "Miscellaneous" bucket.

Look for "Inter-group" Relationships

When you have worked through all the groups, look for relationships which run from a table in one group to a table in another, using the same clues (column names and indexes) as before.

Thursday 26 September 2013

How to “dig into a large database”?

This posting was prompted by a question on one of the BA forums on LinkedIn. The original question asked, in essence, how to go about digging into a large, unfamiliar database.

I gave an answer in the forum, but here is an expansion, and some ponderings.

First of all, let’s set out the terms of reference. The question asks about a “large unfamiliar” database. I think we can assume that “unfamiliar” is “one that we haven’t encountered before”, but what is “LARGE”? To me “large” could be:  
  •  Lots of tables
  • Many terror-bytes ;-)
  • Lots of transactions
  • Lots of users
  • There may be other interpretations

I’m going to go with “Lots of tables”, with the definition of “lots of” being:
“more than I can conveniently hold in my head at one time”

I've also assumed that we are working with a "transactional database" rather than a "data warehouse".

Preparation

Gilian, the questioner, was given some good suggestions, which I summarised as "Collecting Information" or perhaps “Preparation”:
  • Understand objectives of "The Business"
  • Understand the objectives of "This Project" (Digging into the Database)
  • Collect relevant Organisation charts and find out who is responsible for doing what
  • Collect relevant Process Models for the business processes which use the database
  • Get hold of, generate, or otherwise create a data model (Entity Relationship Diagram or similar)


Of these, the one which is specific to working with a Database is the ERD. Having a diagram is an enormous help in visualising how the bits of the database interact.


Chunking

For me, the next step is to divide the model into "chunks" containing groups of entities (or tables). This allows you to:  
  • Focus - on one chunk
  • Prioritise - one chunk is more important or more interesting than another, or will be done before it
  • Estimate - chunks are different sizes
  • Delegate - you do that chunk, I'll do this one
  • And generally "Manage" the work: do whatever the project objectives require.

I would use several techniques to divide the database or model up into chunks. These techniques work equally well with logical and physical data models. It can be quite a lot of work if you have a large model. None of the techniques are particularly complicated, but they are a little tricky to explain in words.

Here is a list of techniques: 
  • Layering
  • Group around Focal Entities
  • Process Impact Groups
  • Realigning

Organise the Data Model

I cannot over-emphasise how important it is to have a well-laid-out diagram. Some tools do it well, some do it less well. My preference is to have “independent things” at the top.


I’ve invented a business.
  • We take ORDERs from CUSTOMERs. 
  • Each ORDER consists of one or more ORDER_LINES and each line is for a PRODUCT.
  • We Deliver what the customer wants as DELIVERY CONSIGNMENTS. 
  • Each CONSIGNMENT contains one or more Batches of product (I haven’t got a snappy name for that).
  • We know where to take the consignment by magic, because we don’t have an Address for the Customer!
  • We reconcile quantities delivered against quantities ordered, because we sometimes have to split an order across several deliveries.
  • That’s it!

Layering


"Layering" involves classifying the entities or groups of entities as being about:
  • Classifications
  • Things
  • Transactions
  • Reconciliations

Things

Let’s start with “Things”. Things can be concrete or abstract. We usually record a “Thing” because it is useful in doing our business. Examples of Things are:
  • People
  • Organisations
  • Products
  • Places
  • Organisation Units (within our organisation, or somebody else’s)

Classifications

Every business has endless ways of classifying “Things” or organising them into hierarchies. I just think of them as fancy attributes of the “Things” unless I’m studying them in their own right.  
Note: “Transactions” can have classifications too (in fact almost anything can and does), I’ve just omitted them from the diagram!
Note: The same structure of “Classification” can apply to more than one thing. This makes sense if, for example, the classification is a hierarchy of “geographic area”. Put it in an arbitrary place, note that it belongs in other places as well, and move on! 

Transactions

Transactions are what the business is really interested in. They are often the focus of Business Processes.
  • Order
  • Delivery
  • Booking

Where there are parts of Transactions (e.g. Order_Line), keep the child with the parent.

Reconciliations

Reconciliations" (between Transactions) occur when something is “checked against something else”. In this case we are recording that “6 widgits have been ordered” and that “3 (or 6) have been delivered”.
 If you use these “layers”, arranged as in the diagram,  you will very likely find that the "One-to-manys" point from the top (one) down (many) the page.

Groups around Focal Entities


To do this, pick an entity which is a “Thing” or a “Transaction” then bring together the entities which describe it, or give more detail about it. Draw a line round it, give it a name, even if only in your head!
  • "Customer and associated classifications" and
  • "Order and Order_line" are candidate groups.

Process Impact Groups


To create a "Process Impact Group"
  • Select a business process
  • Draw lines around the entities which it creates, updates and refers to as part of doing its work.
  • You should get a sort of contour map on the data model. 

In my example the processes are: 
  • Place Order
  • Assemble Delivery Consignment
  • Confirm Delivery (has taken place)

It is normal for there to be similarities between “Process Impact Groups” and “Focal Entity Groups”.  In fact, it would be unusual if there were not similarities!

Realigning


Try moving parts under headers (so, Order_line under Order) and reconciliations under the transaction which causes them. In the diagram, I’ve moved “Delivered Order Line” under “Delivery”, because it’s created by “Delivery related processes” rather than when the Order is created.

Finally, “Chunking”

Based on the insights you have gained from the above, draw a boundary around your "chunks".
The various techniques are mutually supportive, not mutually exclusive. The chunks are of arbitrary size. If it is useful, you can: 
  • combine neighbouring chunks together or
  • you can use the techniques (especially "Focal entities" and "Process Entity Groups") to break them down until you reach a single table/entity.


Tools

My preferred tools for doing this are: a quiet conference room, late at night; the largest whiteboard I can find; lots of sticky Post-its or file cards (several colours); a pack of whiteboard pens; black coffee on tap and the prospect of beer when I’ve finished. For a large database (several hundred tables) it can take several days!

Once you've done all this, then all you have to do is do the work!


I hope you have as much fun doing it as I had writing about it! :-)

Friday 20 September 2013

Security – Using Server Side security with MS SQL Server

There are times when I want to do something, and it gives rise to questions. Often I have to put the questions to one side in order to get on with the thing which is my immediate priority, intending to come back to them in the future. This is one of the occasions when I have had the opportunity to go back and answer some of those questions.

The questions in this case were:
  • How does Server-side security work on MS SQL Server? And
  • How can I control the access different users have to my database?


I’m not (have never been, and do not really intend to become) a DBA (Database Administrator). I understand a bit about “privileges” but in the past I’ve always had someone else “doing it for me”, and in any case, I’ve worked on other databases such as DB/2 or Oracle.

After a little bit of research and a bit of experimentation (fiddling around), I found the answer to my questions. Although it was hardly earth-shattering, the understanding is satisfying.

The key is to understand that in MS SQL Server the “Server Instance” and the “Database” are separate entities. It is possible to have several databases inside the same Server instance. In fact I do it all the time when I’m experimenting. This means that there are two separate things to define:
  • A LOGIN, which gives access to the Server Instance, and with Server-side authentication, provides the security, and
  • A USER, which belongs to the Database and is granted the Database privileges (including CONNECT).


Figure 1 Server-side security for MS SQL Server

Summary of the stages (a short SQL sketch follows the list):
  1. Ensure that SQL Server is set up to allow Server Authentication
  2. Create the LOGIN (in the SQL Server Instance)
  3. Create a USER corresponding with the LOGIN (in the Database)
  4. Grant the USER CONNECT privilege (happens by default, but can be revoked)
  5. Grant the USER the appropriate privileges on the Database
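
For the record, here is a minimal sketch of those stages in T-SQL. The names (app_login, app_user, MyDatabase) and the password are made up for illustration:

-- 1. (Done once, in the server settings): allow SQL Server
--    ("Server") Authentication alongside Windows Authentication.

-- 2. In the Server Instance: create the LOGIN.
CREATE LOGIN app_login WITH PASSWORD = 'Str0ng!Passw0rd';

-- 3. In the Database: create a USER mapped to that LOGIN.
USE MyDatabase;
CREATE USER app_user FOR LOGIN app_login;

-- 4. CONNECT is granted by default, but it can be made explicit
--    (or revoked).
GRANT CONNECT TO app_user;

-- 5. Grant whatever Database privileges the user actually needs.
GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO app_user;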



If you're interested in taking this a bit further, I’ve summed all this up in a video on YouTube.

Friday 6 September 2013

What is the point of Unit Testing?

I don't normally add two entries to my blog on the same day but something attracted my interest. Somebody asked the question: “What is the point of Unit Testing?”

Actually, they asked two questions:
  • Why do we perform Unit Testing, even though we are going to do System testing?
  • What are the benefits of Unit Testing?

To which my immediate thought responses were:
  • Would you deliberately make something from parts you knew were broken? Or
  • Would you make something from parts which you suspected were broken?

And then I thought "that might sound a little rude" and reconsidered…

Modern development methods have lots of benefits, but sometimes in flexible and rapid methods something gets lost. People forget why things are done. Or, if they’ve never been told, they wonder if they are worth bothering with.

Now, you should always question everything, but sometimes things are there for a good reason. If you plan to take something away;
  • Understand why it was there in the first place,
  • If it is no longer needed, explain why it is no longer needed.
  • Understand (and be prepared to live with) the consequences of taking it away.

An old-fashioned view of a System Development Process

(by the way, you’ll notice that some of this material has been re-cycled from elsewhere)

If we take a rather old-fashioned view of systems development using a “waterfall” model, then we will have a number of phases (mine come from an old IBM Development Process, but the exact process doesn’t really matter).
  • Each phase produces something, and the stage below it expands it (produces more “things”) and adds detail.
  • The “Requirements” specify what things the system needs to do. They also identify the things that need to be visible on the surface of the system.
  • For each of the things that need to be visible on the surface, we need an “External Design”
  • The “External Design” specifies the appearance and functional behaviour of the system.
  • For everything we have in the “External Design” we need a design for the “Internals”
  • And finally someone needs to “Build” what we have specified.

You can view this as a waterfall, down the side of a valley. The process is one of decomposition.

I don’t especially recommend “Waterfall” as a way of running a development project, but it is a simple model which is useful as an illustration.

The Testing Process


On the other side of the valley we build things up.
  • Units are tested.
  • When they work they are aggregated into “Modules” or “Assemblies” or “Subsystems”, which are tested.
  • These assemblies are assembled into the System which is tested as a whole.
  • Finally the System is tested by representatives of the Users.

The process is one of developing “bits”, testing the bits and then assembling the bits and then testing the assembly.

The assembly process (in the sense of “putting things together”, not compiling a file written in “assembler”) costs time and effort. Parts are tested as soon as practical after they are created and are not used until they conform to their specification. The benefit is that we are always working with things that we think work properly.

In a well-organised world, you would like to think that the Users are testing against the original requirements!

Development and Testing should be mutually supportive


What should happen is that at every level, each component or assembly should have some sort of specification (it may be a very rudimentary specification, but it should still exist) and it should be tested against that.

In fact, there is a thoroughly respectable development approach called “Test Driven Development”. The idea here is that the (Business) Analyst writes a “Test” which can be used to demonstrate that the system, at whatever level, is doing what it is supposed to be doing. Of course, the Analyst may need help to write an automated test, but the content should come from the Analyst.

This approach is really useful all the way through the development process. It’s a really good idea if a developer writes tests for the code s/he is writing before the code! In fact, I have known places where they insisted that a test was written for a bug before the developer attempted to fix the bug. That way demonstrating the fix was easy: Run the test without the fix – Test demonstrates the bug. Apply the fix and run the test again.

The Cost of Not Doing Unit (or other low-level) Testing

Without unit testing, all bugs are found at the topmost level. That means they are found after the product has been assembled or “built”, and then we have to work out where each error actually originated.

The Benefits of Unit Testing

  • Bugs are found sooner, and they are found closer to the point at which they are created.
  • Unit testing lends itself to automated testing which can be integrated with the build process. Ask a professional Java developer about “JUnit” or a Python developer about “unittest” (one word).
  • Automated testing increases the chances of trapping “regression bugs” as code is enhanced and bugs are fixed.

All of the above mean that well-planned and executed Unit Testing results in:
  • Reduced overall cost
  • Improved product quality





Oracle and Courses

My personal development time last week was spent completing an online course "Oracle DBA for absolute beginners".

I wouldn't have described myself as an "absolute beginner", but I found plenty to enjoy in the course and came away having learned quite a bit about what is going on inside Oracle, and I assume most other database managers.

Circumstances influence what we do in life and so far I have had much more exposure to DB/2 and MS SQL Server than to Oracle. That hasn't been a decision on my part, simply the choices that had been made for the projects I was involved in.

In a similar way, I've spent much more time "dealing with users" as a Business Analyst, than I have working out how to manage the space requirements and performance of a database. It does me good to learn just a little about the things a DBA has to consider. I don't have to let those considerations govern what I consider the requirements to be, but at least I can understand where other people are coming from.

Taking the course led me to what you might consider "meta" thinking: thinking not about the content of the course, but about the way it was presented and the platform (Udemy) on which it was presented.

I find Udemy interesting. It seems to work well. It certainly worked for me.

Udemy seem to be aiming to be a "neutral marketplace". The courses belong to the course instructors. Of course Udemy have standards for courses, but beyond the usual "fit to print" conditions, they are mostly technical standards (quality of video and sound) rather than subject matter related. In a similar spirit, Udemy promote the platform, but the promotion I have seen seems to be fairly neutral with regard to individual courses. On the other hand, instructors or course owners are completely free to advertise their wares elsewhere and direct potential customers into Udemy. It's a simple model which I think I will investigate further.

Friday 30 August 2013

Another video - Splitting an Access database

When I'm making something I sometimes learn new things. They say "you should never stop learning". I agree with "them", whoever they may be.

While I was working on the SOPAG project, I investigated "splitting" the Access database into two components:

  • Logic and presentation, and 
  • Data (database definitions and data values)

I knew this could be done, but I had not spent much effort on it before.

The splitting itself was a straightforward enough exercise. Most of the work is done by Access itself. However, you might want to confirm that all the decisions it has made are sensible!

I decided to document the results for my own benefit, and then decided to convert a scrappy PowerPoint presentation into something a little more presentable to upload to YouTube.

Here it is: Splitting an Access database

I had fun doing the work to find out how it worked, and fun making the video. I hope you get something from watching it.

Thursday 15 August 2013

Collaboration software and badgers on the internet

A little earlier this week I attended a webinar on PBWorks' ProjectHub product.

Here is a link to a recording of the webinar:

I have used PBWorks' wiki product for years (anyone who remembers me from LCCH may remember me setting up a reference site which was based on PBWiki).

Right now, I'm using the "Freemium" version of ProjectHub to manage a small project I'm running. The project team is split across two countries (Ireland and Wales), so the opportunities of meeting face to face are limited, but thankfully we're all in the same timezone and all speak more-or-less the same language!

It's all going reasonably well, and I started to ask myself "why?"

I've used various bits of collaboration software on various different platforms for years now. Sometimes it works well for the project, sometimes it works less well.

With ProjectHub, I like the way that I can switch quickly between a top-level view, a short-term "what is the next focus" view, and an individual task view.

One of the things which helps my team, is that we've known one another for quite a while and we've agreed:

  • the way that we are going to use the tool, 
  • what our roles are, and 
  • what our individual responsibilities are.


In order to keep things running smoothly, we have a role which I call "the badger". The badger's job is to spot when people have forgotten to complete updates (usually because they've been doing something more important and more interesting) and remind them. The good thing about something like ProjectHub is that the actual effort of adding a couple of lines of comment on a task, or ticking the "complete" box, is easier than arguing, so things remain reasonably up to date. The badger doesn't have to be the PM (right now it is me though); in fact, it's better if it is someone else, because even the PM needs to be badgered sometimes!

Tuesday 13 August 2013

Almost my first video on YouTube

Sometimes things don't go quite as I intend. A little while ago, someone approached me with a potential project. Unfortunately I was too busy at the time to take it on.

The idea had tickled my fancy. It bubbled away in the back of my mind and, as I had odd moments, I created bits of it, as what I would describe as a proof of concept. It was a useful exercise in that it reminded me of a few things, and taught me some things about what Microsoft Access is good at and some things it is not so good at. Inevitably, there are some things I would do differently if I did it again. That's all right, because after all it was only a proof of concept, and there was no real input in the form of "requirements" anyway.

Having produced the thing, then I wanted to show it to an acquaintance. I messed about with a few things and after a couple of iterations, produced this:
SOPAG - A simple Access Application

Having produced the video, and decided to write a "business related blog" it seemed appropriate to share it here.

I wouldn't claim either SOPAG or the video is marvellous, but I've learned a lot from both of them. In fact, I have set up a little project to take them both a little further.

But that is for the next instalment!