Unexpected Metonymy Linkages


Have you ever heard of metonymies? Well, maybe the term is not that well-known, but that’s what it’s called when people refer to certain objects by a brand’s name instead of the product’s. At least in Mexico it’s very common to hear somebody refer to crayons as Crayolas, glue sticks as Pritt, tissues as Kleenex, or many other similar cases. I believe this happens because some brands are greatly preferred over others (mainly because of quality) or because only a few brands offer such products.

“Tissues” flickr photo by kaktuslampan https://flickr.com/photos/kaktuslampa/1810941940 shared under a Creative Commons (BY-SA) license

Although not exactly a metonymy, the situation with UML diagrams is very similar in a sense. But before I explain myself, I want to briefly remind you what UML is. I have already mentioned this in other blog entries, but UML is the most prominent modeling language out there. It was developed by three people working for Rational Software between 1994 and 1995, then adopted by, you won’t believe it… the Object Management Group (OMG, I know) in 1997, and finally published by the International Organization for Standardization (ISO) as an official standard in 2005. Since then, many versions of UML have been released, with several additions and changes to existing diagrams, as well as entirely new kinds of diagrams.

I wanted to compare UML diagrams to metonymies because they have similar effects. If somebody needs a graphical representation of a system and its classes, they’ll probably think that they need a class diagram (which is the UML equivalent of such a representation) for one of the following reasons: they do want a class diagram, knowing that it’s the UML implementation and that there are other alternatives; they want something similar to a class diagram, but since that’s the only name they know, they believe that’s what they are all called; or they believe class diagrams are the only alternative to what they need. Even if none of these is the case, the reference everybody will most likely use is a UML class diagram, because of how common they are.

Do you get where my comparison is coming from? Even if a class diagram is not a brand, that’s probably what most people will call that kind of representation. So even if it’s not entirely accurate, it allows us to understand what is needed.

But now, think about this: if there’s already a diagram that depicts exactly what I need, and it’s so popular that everyone around me knows it well enough to work as a point of reference, why would I want an alternative? To me, it makes no sense to look for an alternative to UML: it works well and practically everybody already uses it, so why waste time and effort looking for something else that does the same? I believe that’s the main reason everyone keeps using UML diagrams. So, in the example I gave two paragraphs above, the reason will almost always be that they do want a class diagram, you know?

So yeah, UML diagrams are super common, hence how useful it is to know how their classification works and the basic behavior/structure of some of them. Oh, would you look at that, that’s exactly what we were told to cover in this entry, so I guess I’ll have to talk about that now.

According to uml-diagrams.org, the classification of a UML diagram is defined by the primary graphical symbols shown on it. Here’s a representation of the current classification of UML diagrams; for this entry I’ll just mention three:

If there’s a sequence of message exchanges between lifelines, the diagram classifies as a sequence diagram. Now, if you know as much as I did before doing some research, you may be thinking: what in the heavens is a lifeline? So I will answer that question for you: lifelines are each of the individual entities that take part in a process of the respective system. Sequence diagrams are the most common kind of interaction diagram, with the focus being the already mentioned messages. According to the examples I saw, the interactions between entities and their respective restrictions are important to successfully achieve a goal or end a process. I understood it as a more complex and complete flowchart, even if there isn’t exactly a flow to follow.

If the primary symbols are classes, then the diagram is, surprisingly, a class diagram. These may be easier to understand, since a lot of people are already familiar with OOP and that kind of class. I’ve used this type of diagram since my second semester; they’re useful to get a general idea of the methods and attributes of certain classes (not so much of what they do, but of their definitions), along with the interactions/relationships between them.

Object diagrams are an interesting case. Since UML 2.4 (the current version is 2.5) there’s no definition for object diagrams, but previous versions defined them as a way of depicting instances of classes, with values for each of their attributes and the relationships between them. UML 2.5 states that class and object diagrams are completely unrelated, but other UML sources say that object diagrams are kind of an instance of class diagrams (which is funny if you consider that objects are instances of classes themselves). Anyway, these diagrams can be useful to keep track of certain values at set points or situations inside a program or process. I’ve never used an object diagram, but it seems to be an interesting concept.
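To make the class/object distinction concrete, here’s a tiny Python sketch (the Student class and its values are made up for illustration). A class diagram would describe the Student box itself, while an object diagram would snapshot the two instances below with their attribute values:

```python
# The class diagram level: a box named "Student" with two attributes.
class Student:
    def __init__(self, name, semester):
        self.name = name
        self.semester = semester

# The object diagram level: concrete instances at one moment in time.
# A diagram would show boxes labeled "ana : Student" and "luis : Student",
# each listing its current attribute values.
ana = Student("Ana", 4)
luis = Student("Luis", 2)

print(ana.name, ana.semester)   # Ana 4
print(luis.name, luis.semester) # Luis 2
```

So the object diagram really is “an instance of” the class diagram, in the same sense that `ana` is an instance of `Student`.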

Because UML is such a big topic, this is the first part of two about UML, so I’ll wrap everything up in the next post. For now, I’d recommend checking out the link in the paragraph just above the one where I explained sequence diagrams, that website explains every classification considered in UML 2.5.

TL;DR: I said diagram a lot

Fine design


Has the sunlight ever bothered you while driving, walking, watching a game, or simply when going outside? I think we all know what that’s like, but how do we usually tackle that problem? Hats, sunglasses and even our own hands are common alternatives that, if you think about it, are based around the same simple solution: use the shadow of an object to stop the light from hitting your eyes.

“LEGO” flickr photo by Pietro Zuco https://flickr.com/photos/drzuco/28678155725 shared under a Creative Commons (BY-SA) license

Design patterns are Object-Oriented tools that can be used in a similar way. When a developer faces a problem, a common move is to seek help on the Internet. Why? Because there’s a high chance that someone else has had the exact same problem before, and a solution has probably already been provided.

Design patterns are references, templates that show a standard solution for a problem. Unlike Stack Overflow’s answers, they only tell you what the solution to a specific problem looks like, without a ready-made implementation you can copy. More specifically, design patterns usually follow some kind of structure: they have identifiers (names), the problem they’re solving, the detailed solution and, sometimes, tips for implementation and possible consequences of their use.

They are usually considered good practices, since they’re not arbitrary solutions but well-thought-out, well-defined alternatives. That’s why it’s recommended to completely understand how they solve the problem and why the solution works, instead of just memorizing methods and classes.

The concept of design patterns (at least when talking about software) became popular when the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides) published a book in 1994 called ‘Design Patterns: Elements of Reusable Object-Oriented Software’, where they classified and described 23 different design patterns that could solve recurrent problems in programming.

Their classification consisted of three groups, each covering a different aspect of OOP. I will list the three categories and give an example of one corresponding design pattern for each.

First, we have creational patterns, which are related to the instantiation and creation of objects and classes and how to manage these processes. These patterns control how new objects should be created and managed; some even define how many instances of a certain class should exist at the same time. Speaking of which, the singleton pattern describes a way of having only one instance of a certain class at all times, ensuring that once an instance exists, no others can be created.

“one” flickr photo by andrechinn https://flickr.com/photos/andrec/2893549851 shared under a Creative Commons (BY) license
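The Gang of Four describe patterns independently of any language, but here’s one minimal way a singleton could look in Python (the Config class is just a made-up example, not from any real library):

```python
class Config:
    """A minimal singleton sketch: __new__ always hands back the same instance."""
    _instance = None

    def __new__(cls):
        # Create the one instance on first use; reuse it forever after.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()
print(a is b)  # True: both names point to the single allowed instance
```

No matter how many times you “construct” a `Config`, you always get the same object back, which is exactly the guarantee the pattern describes.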

Structural patterns help define relationships between objects. They usually dictate how two classes work together, in the sense of what one represents to the other. These patterns are useful to later simplify interactions between objects, like the adapter pattern: its implementation is what we normally know as a wrapper, and it basically “translates” an object into another format so otherwise incompatible methods can work on its instances.

“Gift” flickr photo by h0lydevil https://flickr.com/photos/praveenpn4u/4344132308 shared under a Creative Commons (BY) license
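A bare-bones sketch of that wrapper/adapter idea in Python (LegacyPrinter, Printer and the adapter are all hypothetical names invented for this example):

```python
class LegacyPrinter:
    """An existing class whose interface we can't change."""
    def print_text(self, text):
        return f"[legacy] {text}"

class Printer:
    """The interface the rest of our code expects: a render() method."""
    def render(self, text):
        raise NotImplementedError

class LegacyPrinterAdapter(Printer):
    """Wraps ('translates') LegacyPrinter so it fits the Printer interface."""
    def __init__(self, legacy):
        self.legacy = legacy

    def render(self, text):
        # Forward the call to the incompatible method.
        return self.legacy.print_text(text)

adapter = LegacyPrinterAdapter(LegacyPrinter())
print(adapter.render("hello"))  # [legacy] hello
```

Any code written against `Printer` can now use the legacy class without knowing it exists, which is the whole point of the pattern.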

Finally, behavioral patterns work as guides to establish how two or more objects interact and communicate. Usually, their goal is to delegate responsibilities so no unintentional modifications happen. The observer pattern is a way of keeping track of certain changes that happen in an associated class: when a certain action occurs, the observed object notifies its observers and they act accordingly. It works as a messenger.

“eye” flickr photo by randychiu https://flickr.com/photos/randychiu/4302633525 shared under a Creative Commons (BY) license
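The messenger idea can be sketched in a few lines of Python (Subject and Logger are invented names for illustration, not part of any library):

```python
class Subject:
    """Keeps a list of observers and notifies all of them when something changes."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # The "messenger" step: broadcast the event to every observer.
        for observer in self._observers:
            observer.update(event)

class Logger:
    """One possible observer: it just records every event it's told about."""
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)

subject = Subject()
logger = Logger()
subject.attach(logger)
subject.notify("grade changed")
print(logger.events)  # ['grade changed']
```

The subject never needs to know what its observers do with the news, which is how the pattern keeps responsibilities delegated.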

For way more detailed information on the 23 original design patterns and concrete examples for each of them I recommend checking out this page.

To this day, the classification provided by the Gang of Four is still used, and all 23 of their original patterns are still used as standard solutions, although many others have originated since. As happens in many other fields, new categories and patterns have been suggested and, of course, criticism of the original patterns has come up ever since they were first published.

A couple of things I noticed while trying to understand each pattern are that, just like in the example of wrappers I gave earlier, many concepts used in OOP probably originated from design patterns, and some of them are even taken as best practices for some situations. Also, after reading the benefits section of this website, a particular statement stuck in my mind: understanding how each design pattern works instead of memorizing its implementation not only reinforces OOP concepts, as I said in the fourth paragraph, but improves communication too, and as I said in my previous mastery entry, communication is key to efficient teamwork. Knowing what others are talking about when they just mention the name of a pattern saves time on explanations.

Tools like design patterns have lasted for as long as they have because they work, and they work pretty well. It amazes me how good and simple some solutions can be and, since their focus is OOP, I think they’re the thing I can get the most out of among the topics of the course so far.

TL;DR: Efficient solutions are efficient.

laitraP tsriF ehT


In case you didn’t get the title of the post: this is a reflection on The First Partial.

It’s been a whole month since the semester started and, so far, four topics have been covered in the course. I’d like to summarize the important stuff I’ve learned about each of them, as well as the things I’ve grasped as the course went on.

But first:

“moc surprise” flickr photo by Capricorn Cringe https://flickr.com/photos/capricorncringe/3425386526 shared under a Creative Commons (BY-SA) license

I finally learned how to correctly include images in my blog posts. I know! What a shocker. But jokes aside, I originally hadn’t included any kind of picture in my previous posts mainly because I was too worried about not correctly crediting the original author. Fortunately, mister Alan Levine (not to be confused with Maroon 5’s singer, Adam Levine, as Google suggested) has written this script that can be used in some web browsers to help with Creative Commons licensing; it provides a way to automatically credit Flickr pictures. So thanks to him, and also thanks to my teacher, Ken Bauer, for providing me with this tool. You can click on their names to check out their respective blogs.

Now, back to the topics themselves, I would like to synthesize the most important aspects for each of them. As I’ve said in practically every single one of my entries, I feel like analogies are the best way of understanding new concepts, or at least their importance in certain fields. That’s why I want to start by listing all comparisons I’ve already used, followed by their proper explanation regarding the topic of the corresponding week.


Software Development Life Cycles

The scientific method is a strict procedure followed by many. It functions as a guide that tells you what to do before, during and after an experiment; it encourages you to have some background on the problem you are tackling; it helps you figure out what to expect during your testing and it provides options on what to do whether the experiment is successful or not. 

“SCIENCE” flickr photo by chase_elliott https://flickr.com/photos/chasblackman/7006530174 shared under a Creative Commons (BY) license

SDLC are also procedures, but they support the software development process (duh). They’re practically alternative versions of the scientific method if we look at the features listed in the last paragraph. SDLC do consider stages before (definition of requirements), during (coding) and after (testing, deploying, updating) the development of software. More specifically, they encourage research into what a project will need and define its requirements with the clients’ help prior to implementation; they help figure out the expected outcome for a certain test/case/version to then compare it with actual results; and, at last, they include plans for maintaining, updating, fixing and/or reviewing the application depending on how successful a certain test was.

As the name suggests, SDLC involve cycles and, therefore, a sequence of steps. The number of steps and their names change from one variant to another, but the overall idea is pretty much the same. Many types of SDLC exist: some are simple, some are repetitive, some are intuitive. Their main differences lie in how often you test or deploy your application, how much time you spend on each phase and whether you can go back to previous steps or not. Each variant may serve a different kind of project, so it’s useful to know a thing or two about them.


Unified Software Process

Burgers! Who doesn’t like burgers? Even vegans have found ways to have burgers themselves. This popular dish has, of course, many different versions. Multiple recipes for burgers exist, and alongside different cooks, different tastes and different ingredients, it’s very unlikely for two burgers to taste the same, even for the same recipe. But in the end, whoever eats it will have enjoyed a nice burger, that’s for sure.

“Certified Angus Beef Burger w/ Avocado” flickr photo by ppacificvancouver https://flickr.com/photos/panpacificvancouver/5962021070 shared under a Creative Commons (BY) license

I think the Unified Software Process (USP) is similar, not only to burgers, but to most dishes. The basic structure or idea is the same for all variants: USP divides itself into four blocks (Inception, Elaboration, Construction and Transition), and the last three are also subdivided into iterations. This could be seen as the idea of the dish. You’ll probably include some kind of pasta and meat if you plan on cooking spaghetti and meatballs, for instance.

A USP has to consider disciplines to cover all aspects of the development process. They are then distributed across the four mentioned blocks to indicate how much relevance each of them will have in every stage of the procedure. Disciplines have to be well defined. I like to think of disciplines as the ingredients of a recipe: you choose what to use, how much of it and at what stage of the preparation.

The key difference between SDLC and USP is that, just like a recipe, USP serves as a reference and not necessarily as a strict guide. Many refinements of the process exist, each with different disciplines, distributions and goals. Some tweaks may be made to better fit a project, just like you can add or remove an ingredient from a recipe to meet your preferences. Adaptation also allows USP to work on smaller projects; its versatility is what makes me prefer it over SDLC.


Use Cases

Burgers! But just the patties, yay! Wouldn’t you agree that a single patty could change a burger’s flavor completely? It doesn’t matter how good the rest of the burger is: if the “base” is bad, the whole thing will turn out worse than it could’ve been.

“Burger patties” flickr photo by star5112 https://flickr.com/photos/johnjoh/520999007 shared under a Creative Commons (BY-SA) license

Just like patties and burgers, requirements are the base of any software development process/life cycle. Your requirements define what’s important, what has priority over what, what’s completely necessary and what could be postponed.

Use cases describe how every kind of user accomplishes different goals by using the application. These goals should cover all possible scenarios so, when listed, the developers can figure out how those goals could be achieved through their software; they can range from something as simple as logging in to finding a very specific section to perform a very specific action.

Just to clarify, by users I don’t mean each individual using the application, but rather the roles they can take. Roles define what you are able to do and what you are allowed to see/change. For example, a student and a teacher don’t have the same kind of privileges; teachers should be allowed to change students’ grades, but the students themselves should only be able to consult them.

Thinking of situations like this helps you come up with new use cases; the example above could be reinterpreted to define a new user requirement for both the student and teacher roles. The best way of thinking of new situations, in my opinion, is to find an actual representative of the role and ask what they usually want to do, what they usually have to do and, basically, how their role functions.

Use cases are an easy way of visualizing user requirements.


Modeling Languages

Communication is vital for people. Whether you’re reading something like this entry or seeing a picture like the one below, you’re probably getting a message. People can accurately get these messages only if they know the language they originate from. The more people understand a language, the better it is to use it since it requires less effort. 

“Smiley” flickr photo by katerha https://flickr.com/photos/katerha/4238730308 shared under a Creative Commons (BY) license

Modeling languages intend to formalize the way multiple types of information are represented, so more people can let others know what is needed in, for instance, each stage of the software development process. The goal is to set a standard that everyone understands, so representations are interpreted as quickly as possible.

Diagrams and other graphical representations are popular ways of defining modeling languages, since shapes and colors are easily recognizable and do not depend on a spoken language to be understood. In fact, the most popular modeling language, UML, is mostly graphical.

UML diagrams are standard ways of depicting different tools and procedures in a more universal manner, and they cover practically every process related to software. From OOP class diagrams to representation of use cases, UML has defined structures, relations, symbols and many more things to express everything needed in each of them.

Modeling languages are a powerful tool that break language barriers and allow for better teamwork and more efficient communication.


What else?

Outside of the topics of the masteries, I’ve learned a few things related to OOP during the last couple of classes, and I now realize that most languages I had been told are object-oriented aren’t completely so. We’ve been working with Smalltalk, a programming language where absolutely everything is treated as an object, because everything is, indeed, an object of some sort.

The way Smalltalk works is pretty different from what I’m used to, so I’ve been a little confused as to how to use it efficiently/correctly, but I think that’s something I’ll eventually figure out.

Also, I like that I’ve finally had the opportunity to practice my English once again after like a whole year of not really needing it. I think it’s cool.

For now, I hope the next topics make me think of analogies as hard as these last four have. I think that’s a fun challenge, and it helps both me and whoever might end up reading these entries to better understand all of these different concepts. And not only that: the topics we’ve already gone through are quite useful; even if we don’t get to use them yet, the fact that we are now conscious of their existence and how they roughly work is useful enough.

“Thumb up…” flickr photo by Guido van Nispen https://flickr.com/photos/vannispen/33686742682 shared under a Creative Commons (BY) license

TL;DR: I hope to keep doing as well as I think I have.

A way to dive into each other’s minds


Have you ever seen cartoons/comics in languages you can’t even place? Were you able to, at the very least, detect what emotions the characters were portraying?

Angry deer with japanese symbols.
“Mind-controlling deer, Nara” flickr photo by Ruth and Dave https://flickr.com/photos/ruthanddave/395769625 shared under a Creative Commons (BY) license

When you take a look at an image like the one above, you may notice symbols in a language you may not be able to read! (It’s Japanese, by the way.) But what you could probably tell is that these deer are quite angry. The lightning-looking marks and the frowning faces may have given that away; those are common ways of showing anger in a drawing, no matter where you’re from.

Modeling languages aim to achieve something similar: a way to represent or structure a complex set of information so that, with just a glance at the representation, one could infer how everything has been constructed. Modeling languages define their own rules on how to represent things; some may be simpler than others, some may specialize in certain fields, some may even be only drawings, but all of them have roughly the same goal and stick to their strict norms.

There are several types of modeling languages depending on how the representation comes into play, and I don’t mean the rules themselves, but the medium. The two most common types are graphical and textual, which are pretty much self-explanatory, but just to be sure:

Graphical types rely on shapes, lines and many other visual cues to show relationships, interactions and other possible meanings for whatever is being represented, while textual types only use, well, text: plain words and occasionally some symbols other than numbers or letters. Other, more specialized types also exist, but I think these two are enough for now.

At this point, a very logical question may have popped into your head: is either of these types better than the other? I would say that, indeed, one of them is. Let me elaborate: for most English speakers, seeing a smiley face and reading the word happy could put the same idea in their head, but what would happen if a non-English speaker saw both the word and the drawing? Most likely only the smiley face would evoke the idea of happiness in their mind. What I’m trying to say is simple: shapes and lines are more easily identifiable by most people, so it makes sense to think that graphical modeling languages should be preferred. You could probably even tell as much when looking at the deer picture once more: between the cartoon and the text, you probably only got a message from the former.

“Smiley relaxation ball” flickr photo by sebgqc https://flickr.com/photos/42216816@N08/8634670769 shared under a Creative Commons (BY) license

One can’t really talk about modeling languages without mentioning the most popular and famous of them all: the Unified Modeling Language (UML). It is a graphical modeling language, since it’s mostly composed of diagrams. It was developed by three people working for Rational around 1995, and it has since become the standard for software modeling.

UML has a broad variety of diagrams and components, which vary depending on what kind of process is being covered.

From activity flowcharts to the topic of my last post, use cases, UML covers practically every kind of process related to software. It has defined symbols, representations and arrangements to easily visualize what is being worked on.

I do not plan on going through all of these variants (if you want to know more about any of them, I recommend checking out this post). Since this is an Object-Oriented course, I figured that the most appropriate thing to do would be to focus on class diagrams.

Class diagrams are yet another type of diagram defined in UML. They include, in one way or another, graphical representations of most (I wouldn’t dare say all) concepts in the Object-Oriented Paradigm, from attributes and methods to class inheritance.

Object-Oriented programming works around the concept of classes and their relationships, and that’s what UML attempts to represent with this type of diagram (hence the name). Classes are drawn as rectangles divided into three horizontal segments: the name of the class goes on top, the attributes along with their types come next, and finally the methods and their return types are specified. For both attributes and methods, a symbol just before the name indicates whether it’s public (+), private (-) or protected (#). Related classes are connected with lines: a line ending in a filled diamond indicates composition, while a line ending in a white triangle means inheritance. A minimum or maximum number of relationships between classes (their multiplicity) can also be specified.

A Class Diagram example.
“uml ECE3574” flickr photo by john7575doe https://flickr.com/photos/68506614@N02/6233298408 shared under a Creative Commons (BY) license
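To connect the notation with actual code, here’s roughly how a small two-rectangle diagram (a Person class, with Student pointing at it via the white inheritance triangle) could translate into Python. Both classes are made up for this example:

```python
class Person:
    """Diagram rectangle "Person": attribute name: str, method greet(): str.
    A '+' before each member would mark it as public."""
    def __init__(self, name: str):
        self.name = name

    def greet(self) -> str:
        return f"Hi, I'm {self.name}"

class Student(Person):
    """The white-triangle arrow from Student to Person means inheritance,
    so Student gets name and greet() for free and adds its own attribute."""
    def __init__(self, name: str, semester: int):
        super().__init__(name)
        self.semester = semester

s = Student("Ana", 3)
print(s.greet())  # Hi, I'm Ana
```

Reading the diagram and reading this code should give you the same picture: what each class holds, what it can do, and who inherits from whom.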

There are obviously many more rules to class diagrams, but explaining them all is not the intention of this entry. A person familiar with UML could take a quick look at the class diagram for a random project and immediately identify its basic structure; even if the words themselves are in another language, they could tell what’s related to what (although it would obviously help to know said language). Here is a deeper explanation of what class diagrams are and how they work. Fun fact: Lucidchart is actually the website where I create all of my diagrams for whatever graphical modeling language I need.

There is also a variant of class diagrams known as the object diagram (obviously also closely related to OOP), which focuses on instances of objects rather than on class definitions. I thought it was worth mentioning.

To sum all of this up, modeling languages unify the way of representing information. They define rules so we can understand structured projects in an easy way. There are graphical and textual types, but IMO graphical is better. Just ask UML.

TL;DR: 😀 > happy

You could definitely use some cases . . .

Do you remember the burgers I was talking about last time? I feel like I need to bring them up once again for this particular topic. Even if there are multiple recipes for the same kind of food, there will always be key elements that can mean the difference between what’s good and what’s not. In the case of burgers, I’d say the patty is the main ingredient: even if everything else is just fine, a raw or overcooked patty could ruin it all.

“Burger patties” flickr photo by khawkins04 https://flickr.com/photos/khawkins04/5969315133 shared under a Creative Commons (BY) license

So far, I’ve talked about two ways of assisting software development, but this time I won’t be covering something as wide as the whole process, but rather a method that, just like patties, draws the fine line between a well-managed and a poorly managed project.

If you recall from the SDLC entry, one of the stages of the software development process involves the definition of requirements for the respective project. These requirements, just like a patty, must be done correctly if you want an at least decent application.

When you try to list requirements for a project, some may slip your mind. You don’t want this to happen: it could later translate into a missing function inside your software, or into your users not seeing their needs fulfilled. Fortunately, there’s a way to know which requirements have been completely covered and where implementation is lacking.

Use cases are a methodology used to find, clarify and classify user requirements. The way I see it, they’re one of the simplest methods in any stage of the software development process, since they involve very intuitive diagrams that make it easy to visualize what’s missing from a certain scenario.

This method consists of imagining all the possible interactions between users and the software: what goals everyone involved could have in each of these interactions, how they could be achieved and what concrete actions are to be taken. Particularly, use cases are the situations for which there are multiple possible paths that could take a user to their goal. Notice that I previously used the word scenario: use case scenarios are the specific paths selected to achieve goals. This page includes clearer definitions of use case components, if you wish to know a little more.

The diagrams used for use cases have many elements: every type of user (or actor) has to be present, and the relationships between actors are important too. Generally, situations are clear and point to all the involved actors along with the action each of them performs; conditional or additional events may be added too.

“jobs” flickr photo by OkACTE https://flickr.com/photos/okacte/6799778407 shared under a Creative Commons (BY) license

Use cases have to follow a certain course of action; it’s important for situations to trigger other events, in which different actors could be involved.

But here’s where the fun part comes in. You can tackle this in at least three different ways: you could take each of your actors and ask yourself “What are this actor’s chores?”, then put them together into the different situations to be considered for your project; you could go through all your requirements and associate each of them with a certain use case; or, finally, the one I would least recommend: think of every situation that could come up during the operation of your program, then associate actors with each of them. It doesn’t matter which technique you prefer; this way of working with requirements makes it very clear what is missing from your diagram once you start simulating different possible situations.

There are at least a couple of scenarios in which you don’t have to think about everything by yourself. Your clients could have already worked on it and defined all possible paths, outcomes or situations by themselves. You’d only have to break them down into simple actions for every actor involved.

If you don’t have a client, or simply if you have the chance, I’d say it’s better to find actual people who could represent the actors considered for your project, preferably people with real experience in that role. You can simply ask them what kind of things they have to do, from the most basic to the most obscure and little-known. They may have an easier time thinking of scenarios that could come up during the use of your software.

One thing to keep in mind, though, is that the diagram wouldn’t be as useful if you included too many cases or too many variables. It’s better to keep it as simple as possible so you don’t have to navigate through 200 different scenarios.

Just because I feel this is necessary: the most common representation of use cases is UML (Unified Modeling Language), a graphical notation where you have a stick figure for every actor, an oval for each action, and different kinds of lines to represent consequences, variants, or optional paths. This particular topic (UML) is covered in the next mastery, so I’d recommend checking that one out.

For now, I want to include an example of a use case diagram I made for another course some time ago. It’s in Spanish and there’s only one actor, but I think it could serve as a reference:

Doctor does lots of stuff

Like every other method I’ve talked about so far, it’s a very useful tool if you use it properly. Since it already involves actors, you could think of it as a play: if you have too few actors, or even too many, the experience may be compromised; if there’s no scenery, or if it doesn’t fit the theme at all, you may be confused as to what’s happening. Always keep in mind that you will have to review and revisit your diagram over and over again, so it’s better for you to keep it simple.

TL;DR: Use cases are a way of defining who can do what, and with whom.

A Recipe for Success: Unified Processes

Have you ever had burgers? You know how every restaurant, chef, or cook has their own recipe for the same thing? Regarding burgers, some include cheese, some add pickles, some have vegetables; even if two different recipes have the same ingredients, the amounts used or the way they’re prepared may differ, and the person preparing it may even add their own twist! This makes it very unlikely for two burgers to look the same, let alone taste the same.

Last week I wrote about the Software Development Life Cycles and how they help in the process of, well, developing software. This time the topic is kind of similar: Unified Software Process (USP).

Once again, the first paragraph may not have made a lot of sense, but it allows me to explain the difference between SDLC and USP. The USP also aids in the software development process, but here’s the twist: just like recipes, there are lots of them for the same kind of thing; they can be followed rigorously or tweaked a bit to better satisfy one’s preferences; and they can be used by either tiny or large groups of people. Allow me to elaborate on each statement.

First, there are many refinements of the Unified Process, but they all intend to serve as a reference rather than a strict guide. One of the characteristics of this process is the presence of four blocks into which the development is divided: Inception, Elaboration, Construction and Transition. (For a brief explanation of each of these blocks you can go to this site).

At the same time, the last three blocks (and sometimes the first one, if the project is large) are themselves divided into iterations, and each of these iterations is expected to produce significant progress. Now, the difference between refinements lies in which disciplines each of them considers.

Disciplines are activity patterns that together cover every aspect of development. Some versions of the UP merge many of them into one, while others break them down as much as possible. Each refinement has its own definitions and goals for every discipline it considers, so at least the differences can be cleared up.

Okay, so let’s work with an example and say a company is planning to use a specific refinement of the USP to develop new software. When I said that one recipe could be reinterpreted or changed to meet one’s desires, I was trying to make an analogy with this example: one of the advantages of the USP is that refinements are adapted to the needs of whoever is developing, usually defining goals, roles, methods, and timing for the development itself, along with how the aforementioned disciplines are distributed among the process’ iterations.
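To make “distributing disciplines among iterations” more concrete, here is a toy sketch. The effort numbers below are completely invented for illustration; they are not official figures from any refinement:

```python
# Invented, simplified effort shares (NOT real numbers from any UP refinement):
# how much of each discipline happens in each of the four UP phases.
phases = ["Inception", "Elaboration", "Construction", "Transition"]

effort = {
    "Requirements":        [0.40, 0.40, 0.15, 0.05],
    "Analysis and Design": [0.10, 0.50, 0.30, 0.10],
    "Implementation":      [0.00, 0.20, 0.60, 0.20],
    "Test":                [0.00, 0.10, 0.50, 0.40],
}

# Sanity check: each discipline's shares across the four phases sum to 100%.
for discipline, shares in effort.items():
    assert abs(sum(shares) - 1.0) < 1e-9, discipline

# The phase where a discipline peaks, e.g. most Implementation in Construction.
peak = phases[effort["Implementation"].index(max(effort["Implementation"]))]
print(peak)  # → Construction
```

This is essentially what the famous “hump chart” of the UP expresses: every discipline runs across all phases, just with different intensity in each one.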

I’ve said that a company may have chosen a specific refinement, but… what refinements exist? Which one could have been chosen? One of the most popular versions of the USP is RUP, which stands for Rational Unified Process. The “Rational” in the name doesn’t exactly mean it’s based on rationality; that’s actually the name of the company that made the refinement. RUP defines nine disciplines: Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment. Since RUP is the best-documented and most widely used version of the USP, going with it is a safer choice. An in-depth description and analysis of RUP can be found here. But do keep in mind that other versions exist, such as the Agile and Basic Unified Processes.

Finally, the last of my comparisons is quite simple, but I felt it was as important as the other two. As a consequence of the last characteristic, USPs can be used for projects of any size. That’s it, I was not kidding.

But for real, this is a feature I like. One can get used to USP by applying it to personal or very small projects, and that could be useful in the long term.

The USP is yet another way of formalizing the software development process, but its customizability makes it, at least for me, a more tempting alternative to the SDLC. The chart that every refinement uses to depict the distribution of disciplines also makes it slightly more intuitive for me, so USP is above SDLC in my book. I got most of my information from this website; it has extensive information about every topic I wanted to cover in this entry, so it’s the one I recommend checking out the most.

TL; DR: USP is like a customizable and scalable SDLC, kind of, sort of.

Analyze, code, test for a bit, finish, update, rinse and repeat


Have you ever been to a science fair? Whether you participated in one or only went as a spectator, the scientific method must have been something you heard a lot about. This method is widely used by scientists around the world because, even if you didn’t realize it back when you first learned about it, it’s quite helpful. It can serve as a guide so you know what you are looking for in an experiment; it lets you record your original thoughts and review them in case they didn’t completely satisfy your objective and something has to change; and you can take note of what went wrong in your experiments so that neither you nor others make the same mistakes, among several other possible advantages.

Now you may be thinking “What is this guy talking about? Wasn’t this supposed to be an OOP themed blog? Why is he talking about the scientific method?”. Well, I’ve always thought that comparisons and analogies are very efficient ways of getting ideas across, and I believe this is no exception.

I’ve stated that the scientific method is useful in many ways, so wouldn’t it be awesome if we, programmers and developers, had a similar way of working? We don’t really run experiments per se, but what we do make is applications and programs, along with the tests and newer versions that follow. A (somewhat) equivalent to the scientific method in our field is this week’s topic: Software Development Life Cycles (SDLC).

SDLC are methodical processes used to assist programmers with the software they’re developing. They serve as a plan to follow in order to get the best possible product. In what ways do SDLC help? Let’s go back to the last sentence in the first paragraph of this entry and make use of the previously mentioned analogies:

“A guide that lets you know what to look for in an experiment” could translate to “a guide that lets you know what the requirements for your project are”; “original thoughts that need to be reviewed and/or changed”? Easy: prototypes or versions of your product that you can revisit and update; and “something that went wrong with the experiment” could be the equivalent of the testing and maintenance phases present in SDLC. While I was trying to come up with an analogy connecting the scientific method and SDLC I stumbled upon this entry from Science Buddies; I recommend checking it out for a much more detailed comparison between every step of both processes.

You may have noticed that, up until this line, I’ve been referring to SDLC as a plural noun, and that is for a very simple reason: unlike the scientific method, there is no single standard Software Development Life Cycle. Multiple variants exist, and there are even several versions of each variant. Before I mention some of these variants and what makes each one different, I have to clear something up: the number of phases/stages is not consistent between definitions (I’ve seen as few as five steps and as many as double that), so, for the rest of this entry, I will be considering the seven stages listed in this Stackify post:

  1. Identify the current problems
  2. Plan
  3. Design
  4. Build
  5. Test
  6. Deploy
  7. Maintain

The stages are practically self-explanatory, but if you wish, you can visit the Stackify post I just mentioned; it includes very concrete questions that must be answered in each of the phases, and questions like that help clarify a lot.
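One detail worth noticing about the seven stages is that they form a cycle: maintenance eventually surfaces new problems, which sends you back to the first stage. A minimal sketch of that wrap-around (my own toy model, nothing from the Stackify post itself):

```python
# The seven stages as a cycle: Maintain feeds back into Identify.
stages = ["Identify", "Plan", "Design", "Build", "Test", "Deploy", "Maintain"]

def next_stage(current: str) -> str:
    """Return the stage that follows `current`, wrapping around at the end."""
    i = stages.index(current)
    return stages[(i + 1) % len(stages)]

print(next_stage("Deploy"))    # → Maintain
print(next_stage("Maintain"))  # → Identify, and the cycle starts over
```

That wrap-around is the “cycle” part of “life cycle”: the process never really ends while the software is in use.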

This week I heard a classmate say that SDLC are a natural approach to tackling software development, and that statement stuck in my mind. I really believe that’s the case: any experienced programmer knows it’d be a pain to mindlessly code a whole application at once, only to find some bug later and try to solve every problem in the code at the same time. Developers have some notion of what each phase means and why the phases follow a certain order.

Back to the different versions or models of the software life cycle: each one of them has a different approach. I’ve noticed that the main variations are whether you can go back to a previous phase or not, how often you test your progress, and how often you deploy your application.

The most straightforward example, which happens to be the most straightforward model, is the Waterfall Model: you can’t go back, you test once per cycle, and you focus on only one phase at a time.

The iterative model focuses on developing one feature or version per cycle, then starting again, fulfilling the requirements little by little. Its cycles tend to be quick, and testing is done after each version is deployed, so bugs can be fixed with every new iteration.
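That “one feature per quick cycle” idea can be sketched in a few lines. The requirement names below are made up for illustration:

```python
# Toy sketch of the iterative model: pick one pending requirement,
# build/test/deploy just that piece, then start the next quick cycle.
requirements = {"login", "search", "checkout"}  # hypothetical features
done = set()
iterations = 0

while done != requirements:
    feature = sorted(requirements - done)[0]  # next pending requirement
    # ... build, test, and deploy only this feature ...
    done.add(feature)
    iterations += 1

print(iterations)  # → 3: one quick cycle per requirement
```

Contrast this with waterfall, where the whole requirement set would go through each phase exactly once, in a single long pass.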

The spiral model is very popular; it has its own four phases, which are iterated over and over throughout the course of development. With constant testing and a lot of precaution, this model is repeated as many times as necessary to achieve the desired product.

Deciding which model is best for your project depends heavily on the conditions you are working under. Robert Half has summarized which model they recommend for several cases and has included even more models that you can check out as well.

I think it’s important to clarify that, even if there are recommendations as to which model to use in each case, each developer has the final word on which methodology will be used for their product. Whatever model you choose, just make sure you do it right and everything should be fine.

TL;DR: Useful Software Life Cycle methods are useful.

Mastery lv: 00 – I’m just getting started


My very first blog entry.
There’s not much to say at the moment. I just finished creating this blog and I know it won’t be the prettiest in the class, but even so, I will try to come up with the most creative titles.

For instance, had the instructions of this activity not said to use the same title as the assignment, I would’ve gone with something like “The Christopher Columbus of the entries in my blog: said to be the first when it really is not.”

I already created my Hypothes.is account, so there’s that too.

me rn