€ngineering economic$


So this is a course about project evaluation and management with a very heavy focus on software development/engineering. I don’t know about others, but for me, when I see the word management I always think of two things: time and money. Some people even say that time IS money, so that narrows it down to just that one thing.

“Money” flickr photo by Worlds Direction https://flickr.com/photos/worldsdirection/34776291250 shared into the public domain using (CC0)

Money is one of the most important factors when it comes to making decisions. According to the Software Engineering Body of Knowledge (SWEBOK) Wiki, economics is the study of value, costs, resources, and their relationship in a given context or situation. Such studies, along with a proper analysis of similar projects, benefits, externalities, costs and other prices, can lead to better perspectives and well-informed decisions, ranging from whether or not to start a project to whether an existing one should be terminated for good. Engineering economics is basically the application of these studies to a particular engineering field.

If you, like me, plan on having (or already have) a career revolving around development, then you may have thought at some point: Finances don’t concern me that much, I just need to get an expert to do all of the needed calculations for me. To be honest, that used to be my mentality until very recently, but I’ve come to realize a couple of things that I’d like to elaborate on.

First, things aren’t as simple as just doing a couple of calculations. There are a lot of elements that must be taken into consideration before making important decisions. The Wiki page I mentioned earlier contains tons of concepts that are essential to economics.

If you ever read one of my UML entries, you know that there is an overwhelming number of different UML diagrams. Well, economics has somewhat of an equivalent to that: economic analysis methods and techniques. But, unlike UML, most of them must be applied to get a complete perspective of the situation. The website of the University of Maryland has a section dedicated to engineering economics, and it also lists several steps and studies that are very useful not only for software engineering, but for general projects and fields. Uncertainty is mentioned both on that website and in the Wiki. They say that uncertainty and risk are common situations and, as such, a couple of estimation techniques are needed to deal with them. With this many considerations (and many others I probably didn’t include), the idea of leaving all of this to an expert doesn’t sound half bad, but that segues into my other point.
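To make "value, costs, and their relationship" a bit more concrete, one of the most common economic analysis techniques is net present value (NPV): future cash flows are worth less than money today, so you discount each one before adding them up. Here's a minimal sketch in Java; the class name and the cash-flow figures are invented purely for illustration:

```java
// Net present value: discount each period's cash flow back to today.
// A project is (naively) worth starting when its NPV is positive.
public class Npv {
    // rate: discount rate per period (e.g. 0.10 for 10%)
    // cashFlows[t]: net cash flow at the end of period t (t = 0 is today)
    public static double npv(double rate, double[] cashFlows) {
        double total = 0.0;
        for (int t = 0; t < cashFlows.length; t++) {
            total += cashFlows[t] / Math.pow(1.0 + rate, t);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical project: invest 1000 now, earn 500 for three years.
        double value = npv(0.10, new double[]{-1000, 500, 500, 500});
        System.out.printf("NPV = %.2f%n", value); // prints NPV = 243.43
    }
}
```

Real analyses pile many more factors on top of this (risk, externalities, alternatives), which is exactly why the estimation techniques above exist.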

Yes, companies do have different departments, so if you are part of the development team, finances shouldn’t be that relevant for you. But what if you have your own company? What if you are starting a project by yourself? Wouldn’t it be useful to have at least some notion of how engineering economics works? You don’t want your efforts to be in vain, or to have someone scam you because of your ignorance of the subject. What I’m trying to get to is: there is no useless knowledge, so understanding at least some basic concepts and procedures would definitely make a difference in the previous scenarios. I’m glad I’m taking this course this semester; I feel like it will do me good.

TL;DR: Engineering economics is vital even for us software developers.

DEADLINE, chapters 1 & 2. “Is diet Dr. Pepper okay for you?”


Imagine having to sing a song about you being laid off and getting kidnapped right after while you’re trying to sleep. That’s what happened to poor Mr. Tompkins, the protagonist of The Deadline: A Novel about Project Management by Tom DeMarco.

“Sleeping” flickr photo by Mussi Katz https://flickr.com/photos/mussikatz/13842448454 shared into the public domain using (PDM)

I had never considered that a novel that teaches about management could exist. I always thought that if I were to learn about this topic (or any other, honestly) I would just get a textbook or one of those “Four things you must know before starting your business” kind of books. To me, this seems like a very interesting format, and I hope that the story stays as interesting as it started.

Mr. Tompkins is introduced as a person who falls asleep during boring lectures and refuses to join very forced choruses, but I don’t blame him; I would also be bummed out if I knew I would need to find a new job soon. He meets a mysterious lady during one of those boring lectures. Her name is Ms. Lahksa Hoolihan, and she introduces herself as none other than an industrial spy. Mr. T is taken aback and starts questioning her.

We learn that Ms. Hoolihan works for Morovia and that she has the ability to pinpoint people whose presence keeps the companies they work for afloat. Mr. T is shocked by her skill, but he gets a bigger surprise when Ms. Hoolihan confesses she has come for HIM. Mr. T is told he is a very good manager, and that his skills are required to help Morovia’s government. Up to this point, Mr. Tompkins would seem like a very clever person, but he didn’t seem that brilliant to me when he casually accepted a drink from a SPY who happened to bring along the ONLY thing he drinks. It was no surprise when the can of diet Dr. Pepper turned out to contain some kind of drug that put Mr. T to sleep, just after he found out that, in fact, he had met Ms. Hoolihan before. Chapter 1 ends right after Mr. T falls unconscious.

At this point, I am personally very interested in seeing what happens next. Am I going to see Mr. T at his fullest anytime soon? Is he really that good at management? Does he know how skilled he is? Where did he meet Ms. Hoolihan? Oh, boy, I sure hope chapter 2 answers these and many other questions I have!

Chapter 2 went something like this:


Book: What did you say? You want to know how strong that drug was?

Me: What? No, that’s not even close to what I-

Book: What about a detailed description of Mr. T’s trip?

Me: I never mentioned such thi-

Book: … what about smells?

Me: Wha-

Book: DROOKTHE


Yes, chapter 2 basically talks about how the drug is making Mr. T feel. Fortunately, the description given does somewhat reflect reality and how Ms. Hoolihan is taking Mr. T with her all the way to Morovia while he has no idea of what’s going on. It’s also during this chapter that we are told how these two first met. In just a few words: they were both attending a lecture about Project Management, but the person in charge, Mr. Kalbfuss, seemed to have no clue about what’s truly important for future managers to learn. Mr. T realizes this, critiques the agenda, and makes fun of the discrepancy between the title of the lecture and what it was really about. Mr. T leaves, but not without catching the attention of Ms. Hoolihan.

Present-time Mr. T is inside some kind of loop, living and reliving the moment he confronted Mr. Kalbfuss with one word echoing inside his head: Administrivia, his suggestion for an appropriate name for the lecture.

In all honesty, I’m liking this book so far. There haven’t been many teachings outside some subtle or broad topics, but I’m hoping that chapter 3 gives Mr. T a chance to shine and start flexing those management muscles. Hopefully I’ll get to talk about what I learn from the following chapters in the rest of my reflections.

“312/365” flickr photo by Kim Siever https://flickr.com/photos/kmsiever/3439553400 shared into the public domain using (PDM)

An inevitable ending


This will probably be the last post I make on this blog, so I’ll try to make it at least a little more memorable than the rest. Before getting into the reflection itself, I want to mention that I’m trying the whole writing-in-Comic-Sans thing to see if drafting really does get easier.

“Quill” flickr photo by rachaelvoorhees https://flickr.com/photos/rachaelvoorhees/2551532922 shared under a Creative Commons (BY-SA) license

Having a blog for the first time was quite the experience. Even if I didn’t exactly choose what to talk about “every week”, I enjoyed having the chance to write and share my point of view on different topics. I find it very entertaining to express myself and come up with analogies to help explain certain things.

In fact, next semester I’ll start teaching in Prep@net’s online high school system. I’ll be a tutor for some group and, honestly, I’m excited. I hope everything I practiced with this blog helps me when it comes to teaching.

After the second midterm reflection I only did three masteries. I’ll do the usual summary of the three corresponding topics, but at the end I’ll have to put a much bigger emphasis on what the course as a whole left me with. As always, I’ll go in chronological order.

Code review

Honestly, this was one of the topics that interested me the most while I was researching it. Seeing the many techniques, practices and tips for something as routine as code review was eye-opening.

“Incorrect vs. Correct” flickr photo by ebenimeli https://flickr.com/photos/ebenimeli/6833633182 shared under a Creative Commons (BY-NC-SA) license

I just realized that my analogy goes even further than I initially thought. At first I compared code review to household chores because both are things that, while not exactly entertaining and easy to put off, are absolutely essential for a better quality of life in the long run. The part I didn’t originally connect is that many people already take both for granted; they’re part of a routine. There’s no need to state explicitly what has to be done, because many of us are already used to doing both activities.

Even if, as I just said, we’re already used to performing some of these tasks, that doesn’t mean we’re doing them in the most efficient way. The methods and tips in my original post are some examples of how the process can be improved.

The techniques I gathered ranged from very casual/informal things, like simply walking over to a teammate and asking them to take a look at your code, all the way to creating forms to make a full report and keep a more rigorous record of the changes made.

Verification and validation

This topic was easy to explain, since everything really depended on a couple of fairly simple concepts. Mind you, they’re simple concepts on their own, but their similar-sounding names make them easy to mix up. The “comparison” for that topic was actually a couple of examples of word pairs that get confused with each other anyway, and the truth is the professor mentions many of those constantly, so I don’t feel it was one of my best analogies.

“OK Blue” flickr photo by sylvar https://flickr.com/photos/sylvar/3175705552 shared under a Creative Commons (BY) license

The verification and validation process is commonly used not only in software, but in everything related to products and services. Basically, a series of tests and reviews is performed to check that whatever is being evaluated meets certain standards and a certain level of quality.

The key to understanding this topic, at least in my view, is knowing which question each part of the process tries to answer. Verification makes sure the product/system is being built correctly; it’s mainly associated with the requirements and design specifications. Validation, on the other hand, makes sure the correct product is being built; it deals with the user’s needs and the problem being solved.

Other details on how to carry out this process, and some of the benefits it brings, are in my original entry. I haven’t put this topic into practice yet, but I could tell it can be very useful.

Object-oriented testing

The first thing I want to say about this one is that in the title of my mastery I wrote “OOkay”, and that wasn’t a typo; I wanted to work in the Object-Oriented part and that was the best I could come up with.

“testing” flickr photo by rjacklin1975 https://flickr.com/photos/rjacklin/436491997 shared under a Creative Commons (BY) license

This topic was very direct; it’s exactly what the title says. Everything I talked about was how tests can be performed, systematized or, in a way, automated in an object-oriented language.

Since both the testing phase of software development and the object-oriented paradigm have a strong presence in computing, I found a thousand and one ways of carrying out these tests. I compared this diversity to numeral systems, because regardless of their classification or how they work, in the end they serve the same purpose. What’s important to keep in mind is that in certain situations one approach may be better than the others.

In short, there are many ways of classifying these tests, and each has its own objectives. Several of them are included and explained in the original page. I think it’s important to understand a few different kinds of tests in case one or another becomes necessary.

But beyond these topics and similar explanations, I’ve reached the long-awaited reflection. This semester I’ve been publishing a lot about various topics related to software design. If I’m honest, I don’t think design is one of my strengths within the degree, but I must admit this course made me realize how many practices from this branch I already use.

Whenever I needed to write a new entry, I would research exhaustively to find enough information to do my work, looking for different sources that, even if I didn’t always cite them, helped me confirm what I had read initially. I very much agree with what Ken has told us a couple of times already: it’s better if we’re not told what to include, how much to write, or where to get the information from.

I liked that for each topic I had the chance to read up as much as I wanted and be selective about what to put in the final product. After researching enough, I could get an idea of what might be most relevant in my context; thanks to the flexible length, I decided to impose on myself a minimum of two pages of text per entry and forced myself to research more whenever I didn’t meet that “quota”; finding our own sources, combined with the above, led me to start searching forums and similar sites for the opinions of people experienced with the topics that came up.

“Research” flickr photo by haynie.thomas36 https://flickr.com/photos/132832534@N03/18344328294 shared under a Creative Commons (BY) license

This set of conditions helped me truly understand how numerous software design processes work; it’s the first time I’ve really felt that self-directed learning has worked. Even if in some of my entries I misinterpreted a topic or only scratched the surface of what it really involves, I have a notion of everything we covered, and I’m enormously grateful for that.

Also related to self-directed learning, being able to do these whenever we wanted, plus the flexible deadlines, helped me a lot throughout the semester. I could focus on more urgent things without worrying about this blog. The way the course works strengthened my sense of responsibility and my ability to find information.

One more thing I want to mention about this course is the project we carried out over the semester. While I already gave a fairly concise conclusion about what it left me with in that post, I’d like to add that besides the usual teamwork (which, by the way, went very well), we got to experience interviewing and finding/communicating with the stakeholders of our project. Experiences like that are always good.

Despite having explained all this, I’m sure you’re still wondering one thing: does the Comic Sans trick actually work? I’m sorry to say it hasn’t worked for me at all; my ideas have flowed the same as always, and I really didn’t notice any difference beyond how ugly my draft looks. It’s surely just a placebo effect, and probably being aware of that is exactly why it didn’t work on me. The only good thing is that Comic Sans and Arial (the font I usually use for my drafts) seem to be similar in size, so I had no trouble measuring how much I’ve been writing.

Anyway, what I learned in past terms can be seen in the first and second midterm reflections. If I repeated what I took away from them, it would basically be the same as what’s already written and wouldn’t really add anything to this entry. Learning can work in many ways, and I feel I’ve already captured in this post everything that seemed relevant to me. I found the way the class was run very pleasant and useful.

“Learn” flickr photo by PlusLexia.com https://flickr.com/photos/153278281@N07/39570228664 shared under a Creative Commons (BY) license

TL;DR: Learning is wonderful

PS: I don’t know why there are so many Scrabble photos

My second midterm reflection


Hi! I know it’s strange that I’m writing in Spanish, but I have several very good reasons for it. First, since it’s my native language, I’ll be able to express myself more easily and get across things I might not know how to say in English; which brings me to my next point: for the first midterm reflection, most of the content was a brief recap of the masteries I had done up to that point, but in retrospect I feel it didn’t add much. In Spanish I can talk about these topics again, but from a different perspective, because of course, in two different languages I’ll explain things differently. Lastly, and definitely not the main reason: it’s much, much easier for me to write and write in Spanish, which comes in very handy for this submission before the final deadline arrives.

“Doing the bird dance.” flickr photo by Bernard Spragg https://flickr.com/photos/volvob12b/9418456505 shared into the public domain using (CC0)

Now, leaving all these details aside, I’d like to borrow a bit of the format I used for my first submission and go over the most relevant things that happened to me this term regarding my classes in general.
First, the projects and exams started getting heavier; I was more tired and needed to push myself quite a bit to keep up with all my duties. The problem here was that my masteries didn’t have such a tight deadline, so I started neglecting that part a little. I did draft one every now and then, but I never got around to publishing them at the time, since I didn’t see it as necessary. By the way, since my last submission of the first term I created a Google Docs document where I’ve been working on all these blog entries. Having everything together like this has been very useful.

Something I only realized while writing the previous paragraph is that there was an experiment I wanted to try for writing my masteries, but I think it’s a bit late for that now. A couple of weeks ago I read the headline of an article saying that when you’re drawing a blank and can’t figure out what to write while working on any kind of text, switching your draft’s font to Comic Sans is very useful for getting the ideas to flow. This is probably just a placebo effect, but I was dying to try something like that anyway. I guess I’ll have to put it to the test for the final reflection.

One last thing I want to mention before getting into this term’s masteries has to do with my sources of information. Something I’ve always done when I need to research a topic for homework or projects is to look it up on Wikipedia to get a general idea of what I’m dealing with and be more selective with my information, but since that site has always been heavily discredited by many teachers, I try not to include Wikipedia pages in my references. Lately I’ve noticed that Wikipedia actually has very complete information, it’s well referenced, and there are always the versions in other languages for even more information on the subject. Even for this blog I’ve gotten a lot of information from Wikipedia, but I haven’t been adding the corresponding hyperlinks. I’ll start doing so, because I’m beginning to see that Wikipedia is a tool that’s also worth consulting.

“Magnify!” flickr photo by robad0b https://flickr.com/photos/robadob/523367560 shared under a Creative Commons (BY-SA) license

Finally, also related to my references, for a couple of masteries I got data and information from different forums and Q&A sites. I know it’s not always ideal, but for some of the topics I needed to cover this term I could only find facts and figures, and I often needed an outside point of view to broaden my perspective and learn a bit more, because at the end of the day, experience is one of the best sources of knowledge there is. I wanted to take in several perspectives in order to build my own.

Well, with all that said, let’s go over the masteries:

Design patterns

This particular topic is one I had known for a year already, thanks to the Software Engineering class. In fact, I had to present on it, but if I’m completely honest, back then I didn’t fully understand it. But that changed! Now I have a clearer view of design patterns and their usefulness.

“pattern” flickr photo by walmarc04 https://flickr.com/photos/ioachimphotos/29693349040 shared into the public domain using (PDM)

Design patterns are tools that guide developers toward better implementations of the solutions to their problems. They are well-studied answers that are considered best practices for a reason. The creators of this concept (and of 23 of the patterns) are known as the Gang of Four. They also defined the classification of patterns by how they work: creational (controlling object creation and instantiation), structural (defining relationships between objects) and behavioral (establishing how two or more objects communicate and interact).

The comparison I chose for this topic was trying to block out sunlight. Some people do it with their hand, some with a cap or hat, some use sunglasses, but everyone is placing something between their eyes and the sunlight. Design patterns work similarly in the sense that they don’t tell you how to achieve your goal; they only indicate what you need to do.
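The sun analogy maps nicely onto code. Here's a minimal sketch of Strategy, one of the 23 GoF behavioral patterns, in Java; the SunBlocker interface and the concrete classes are invented just to match the analogy:

```java
// Strategy (a behavioral GoF pattern): the client depends only on the
// SunBlocker interface; each concrete class "blocks the sun" its own way.
interface SunBlocker {
    String block();
}

class Hand implements SunBlocker {
    public String block() { return "shading my eyes with a hand"; }
}

class Sunglasses implements SunBlocker {
    public String block() { return "wearing sunglasses"; }
}

public class StrategyDemo {
    // The caller doesn't care *how* the light gets blocked, only that it does.
    public static String goOutside(SunBlocker blocker) {
        return "Blocking the sun by " + blocker.block();
    }

    public static void main(String[] args) {
        System.out.println(goOutside(new Hand()));
        System.out.println(goOutside(new Sunglasses()));
    }
}
```

The pattern tells you *what* to do (hide the varying behavior behind one interface); each implementation decides *how*.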

This topic is fundamental to software design, so it makes a lot of sense that we cover it in this course.

UML I and II

UML is also one of those things we learn in the first semesters of the degree. It’s a tool we’ve used again and again, and even so it seems we’re all very far from learning everything there is to know about it.

In the UML masteries, the main focus was explaining some of the diagrams in use. One of the things I learned while researching this topic was that the classification of UML diagrams depends in good part on which primary component predominates.

“All You Need is Tools, Tools, Tools” flickr photo by cogdogblog https://flickr.com/photos/cogdog/49021347718 shared into the public domain using (CC0)

Here’s a very brief summary of the diagrams I got to talk about in their respective masteries:

Sequence diagram: Its main component is the sequence, and it handles interactions between entities to reach a goal.

Class diagram: I feel like I don’t even need to explain this one again.

Object diagram: Somewhat obsolete, but it handles the instances that arise from different classes.

Package diagram: Groups several elements into one. A package, basically.

State diagram: Similar to an automaton, since it defines states and their transitions.

Component diagram: Similar in a way to programming blocks or action modules. Components can be swapped out and modified independently.

I also talked about GRASP and its principles, but the model I really want to comment on is MVC. You see, at first I didn’t quite understand how this model worked, but just a couple of days ago I was working on a Java project where precisely this model made my life impossible. I was working with Threads and trying to draw their updates with a JTable, but because of the threads’ speed and the way MVC works, an exception was constantly being thrown. The reason was that the model was being modified, but the view wasn’t notified in time before repainting, so it looked for elements that no longer existed in the model.
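In hindsight, the usual fix is to hand every model mutation over to Swing's Event Dispatch Thread, so the view is always notified before it repaints. Here's a minimal, headless sketch of that idea; the class and method names are my own invention, while SwingUtilities.invokeLater/invokeAndWait and DefaultTableModel are the real Swing APIs:

```java
import javax.swing.SwingUtilities;
import javax.swing.table.DefaultTableModel;

public class SafeModelUpdate {
    // Runs a worker thread that queues its model change on the EDT,
    // then returns how many rows the model ended up with.
    public static int runWorkerUpdate() throws Exception {
        DefaultTableModel model =
                new DefaultTableModel(new Object[]{"Thread", "Progress"}, 0);

        Thread worker = new Thread(() ->
                // WRONG (my original bug): calling model.addRow(...) directly here.
                // RIGHT: queue the mutation on the Event Dispatch Thread.
                SwingUtilities.invokeLater(() ->
                        model.addRow(new Object[]{"worker-1", "50%"})));
        worker.start();
        worker.join();

        // Drain the EDT queue so the queued addRow has run before we look.
        SwingUtilities.invokeAndWait(() -> {});
        return model.getRowCount();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rows = " + runWorkerUpdate()); // prints rows = 1
    }
}
```

With every mutation funneled through the EDT, a JTable attached to this model never repaints against a state it hasn't been told about.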

Anyway, UML is very useful for design, and everything it involves is already covered in my original entry, so I’d recommend going to check that out instead.

Classes to tables

This topic was one of the easiest for me to understand and write about. Databases can be implemented in many ways, and with topics as well studied as the object-oriented paradigm and relational databases, it was clear there would be very systematic ways of converting from one to the other.

“Old ship planks” flickr photo by zsolt.palatinus https://flickr.com/photos/137424368@N06/26782878937 shared into the public domain using (PDM)

That part was very easy to explain, and in the original page I included a very complete tutorial on how to carry out the process.

Fortunately, this semester I’ve been taking the advanced databases course, where we work with multiple non-relational databases. C* (Cassandra) and MongoDB are the best examples I have to show how different they are from the ones that follow SQL. Thanks to this, it was a bit easier for me to understand the difficulty of mapping classes to non-relational tables; the way the latter are managed, and the fact that joins don’t always exist, make the translation a hard task.

The comparison in this mastery seemed very fitting to me. Going from text to text is always easy, but when the target format changes radically, it will very likely take a much greater effort to achieve a result of comparable quality. That’s exactly what happens with the conversions covered in this topic.
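To show what the easy, relational direction looks like, here's a toy sketch of the class-to-table mapping; the Student class, its column types, and the hand-written DDL are invented for illustration (real ORMs such as Hibernate automate this):

```java
// Toy illustration of the class-to-table mapping: each attribute becomes a
// column, each object becomes a row, and the identity becomes a primary key.
public class ClassToTable {
    static class Student {          // the OO side of the mapping
        long id;                    // -> id BIGINT PRIMARY KEY
        String name;                // -> name VARCHAR(100)
        int semester;               // -> semester INT
    }

    // Hand-written equivalent of what a mapper would emit for Student.
    static String ddlForStudent() {
        return "CREATE TABLE student ("
                + "id BIGINT PRIMARY KEY, "
                + "name VARCHAR(100), "
                + "semester INT)";
    }

    public static void main(String[] args) {
        System.out.println(ddlForStudent());
    }
}
```

The non-relational direction is harder precisely because this neat attribute-to-column, association-to-foreign-key correspondence breaks down.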

Classes to code

For the last topic covered in the second term I used an analogy with painting and the different techniques and styles that exist. I focused on explaining some of the fundamental differences between a few programming languages.

“Classroom” flickr photo by cogdogblog https://flickr.com/photos/cogdog/46702870 shared into the public domain using (CC0)

(What a curious name this photo’s author has)

I also talked about the different paradigms that are or aren’t compatible with certain languages, and what implications this brings when trying to do certain implementations.

I talked about how easy it is to implement an object-oriented design in a language that supports the paradigm, so I didn’t include much on that part. The longest part, and the one that required a bit more research, was object-oriented design in a non-object-oriented language. Of course the conversion wouldn’t be as simple, but according to what I read (because I’ve never needed to do anything like it myself), it’s possible to simulate the behavior of classes and instances to a certain extent, though there will always be some details that simply can’t compare to a language designed for OO.

I still don’t understand why anyone would do something like that, honestly.

Anyway, beyond all the topics covered this term and all the stress school has caused, I can start to relax a bit more. I want to include the final conclusion of the course, my opinions and other things like that in my final reflection; it’s the only thing I have left for the blog, and I already have some ideas in mind.

So, for now, the only thing left to say is that I’m very happy vacation is about to start. I’ll put in a big effort to write another reflection that deserves to appear on Ken Bauer’s Twitter.

“twitter” flickr photo by o.tacke https://flickr.com/photos/otacke/13970870674 shared into the public domain using (CC0)

See you next time.

TL;DR: The good stuff comes in the final reflection

It’s OOkay to fail these


If you are involved in the computing/software areas, you’ve definitely heard about the binary system. You probably also know that there are infinite ways of representing every number by changing the base of the representation. Binary, octal, decimal and hexadecimal are some of the most common representations for numbers nowadays, and each has its own advantages depending on what it is going to be used for. Either way, they are all different ways of doing the same kind of representation and can be used to perform the same operations.

“Numbers” flickr photo by cogdogblog https://flickr.com/photos/cogdog/2932900735 shared into the public domain using (CC0)

As I’ve been doing since I started this blog, I’ll be comparing the first paragraph to an OO topic. This time, I’ll explain the testing performed in OO languages and, as always, my focus will be Java. For the rest of this entry I’m assuming you already know basic concepts like classes or objects.

As I was doing my research for this entry I stumbled upon many different approaches to object oriented testing, and since I really don’t know which one’s best, I wanted to include all three of them. That’s why I mentioned the different numerical representations in the beginning: there are multiple ways of testing and all of them are valid. Of course, some are more detailed or strict than others, but they all serve the same purpose.

First, according to Minigranth, object oriented testing can be classified into three different categories depending on how extensive the actual tests are.

The first category is called Class testing, but it’s also known as Unit testing (which is actually the name I see the most). It consists of testing individual classes to look for errors or bugs, which helps us decide whether or not our classes were implemented as they were designed.
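A class test can be tiny. This sketch checks a single class in isolation against its design; the Counter class is a made-up example, and real projects would typically reach for a framework like JUnit instead of plain checks:

```java
// Class testing / unit testing sketch: exercise one class in isolation
// and check that it behaves as designed.
public class CounterTest {
    // A tiny class under test.
    static class Counter {
        private int value;
        void increment() { value++; }
        int get() { return value; }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        // Does the implementation match the design? Two increments -> 2.
        if (c.get() != 2) throw new AssertionError("Counter is broken");
        System.out.println("class test passed");
    }
}
```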

The second type of test is Inter-class testing or Subsystem testing. This one is pretty easy to understand, since it only tests compatibility between modules or classes and makes sure everything works as it should.

Finally, there’s System testing, which tests all of the classes as a whole and makes sure both functional and non-functional requirements are met.

This seems like an interesting way of testing, since it’s incremental and hence, more intuitive.

Second, Ambysoft has included a diagram that corresponds to each one of the steps of the Full Life Cycle Object-Oriented Testing Method (FLOOT). Unlike the previous classification, this approach is more like a series of steps that must be followed and repeated constantly. There are six different stages included in this cycle:

Requirements testing, analysis testing, architecture/design testing, code testing, system testing and user testing.

Each one of these steps involves making use of certain techniques to complete them. I’ll briefly explain the most common ones.

Model review. An inspection, which can be as informal or strict as one wishes.

Prototype review. Testers pretend to be in real situations and use the use cases to get through them.

Black-box testing. Verifies that certain inputs actually give us the expected outputs.
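As a tiny illustration of the black-box idea, here's a sketch in Python; the classify_triangle function and its cases are made up for the example, and all we check is that given inputs produce the expected outputs, without looking at the implementation:

```python
# Hypothetical function under test; in black-box testing we only care
# about its inputs and outputs, not about how it's implemented.
def classify_triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A black-box test is essentially a table of inputs and expected outputs.
cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
]

for args, expected in cases:
    assert classify_triangle(*args) == expected
print("all black-box cases passed")
```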

This approach provides us with many tools for each of its steps, which the site also lists. There are many options, so this can be useful when you're not sure what to do. Since I didn't list nearly as many techniques as the original website, I'd suggest checking it out, because the information is pretty vast.

Finally, there's another way of classifying testing methods for OOP. This one was provided by EComputerNotes; it revolves around a similar idea to the first one and is also divided into three categories.

State-based testing. Verifies whether the methods of a class interact properly with each other. Finite-state machines are often used for this, since they let us represent different states and the transitions between them; that's also where the name comes from.

Fault-based testing. Determines a set of plausible faults and detects the possibility of bugs in the code. The testing is mainly done to find these errors in the chosen implementation of the program. The effectiveness of this test depends heavily on the ability of the tester to detect possible bugs.

Scenario-based testing. Detects errors caused by incorrect specifications and improper interactions among various segments of the software. This is a common approach: it's really easy to think of how the system/program is supposed to work, so coming up with the cases that lead to those situations is very easy too.
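The state-based category is the easiest one to picture with a small finite-state machine. This is just a sketch, with an invented Order class and invented events, showing a test that walks a legal path of transitions and rejects an illegal one:

```python
# Hypothetical example: a tiny order class modeled as a finite-state machine.
class Order:
    TRANSITIONS = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("new", "cancel"): "cancelled",
    }

    def __init__(self):
        self.state = "new"

    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]

# State-based test: verify a legal path of transitions...
order = Order()
order.fire("pay")
order.fire("ship")
assert order.state == "shipped"

# ...and check that an illegal transition is rejected.
try:
    Order().fire("ship")   # shipping an unpaid order should be illegal
except ValueError:
    print("illegal transition correctly rejected")
```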

This last classification of testing focuses on more technical aspects, so these techniques may be more useful for formal reports, although the practicality of these methods could make it so they’re used for informal testing too.

Whatever classification you choose to follow, the important thing is to test your software. I’ve never seen a single person writing code and not testing it immediately afterwards. This is a crucial step in every sense, and I believe that’s also the reason for the high number of different methods, techniques and types of testing that exist. And that’s only considering the OOP! That’s really fascinating to me.

TL; DR: Just test your code, please.

Veridation & Valification


There are some pairs of words that many people get confused by. Something effective successfully achieves its intention, while something efficient does so with maximum productivity. If you affect something, then you've had an effect on it. Some people don't know the differences between these pairs of words and don't want to learn them; I honestly can't wrap my head around that.

“dsc04080.jpg” flickr photo by mlinksva https://flickr.com/photos/mlinksva/2754456925 shared into the public domain using (CC0)

I didn’t know this, but “verification and validation” (also known as “independent verification and validation”) is a common process in which a service, product or system is accepted after it has met all of its requirements and specifications. The same applies to software, but obviously the implications are different, and in this entry I will try to explain the particularities of this process when applied to software, which is sometimes called software quality control.

I mentioned the similar words in the first paragraph because, even if verification and validation may sound similar, they're not the same. I decided to include the other name for verification and validation, the one that adds independent at the beginning, because it's there for a reason. It's important to tell the difference between them before we even begin with the process itself, since they try to achieve different things. This website already provides us with definitions for both of these words AND with important points that each of them needs for the V&V process.

Verification. Its objective is to ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements. To know if we’ve fulfilled the objective we ask the question “Are we building the product right?”.

Validation. Its objective is to ensure that the product actually meets the user’s needs and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment. To know if we’ve fulfilled the objective we ask the question “Are we building the right product?”.

Knowing these definitions may help you realize that one can exist without the other, but keep in mind that both validation and verification are needed to ensure that a product passes quality control.

There are some activities that are usually related to the V&V process. I actually only found a few, but they seem to represent the majority of the whole thing. The one that seems more representative of the whole objective and, therefore, helps the most as an activity is testing.

Testing is often performed with test cases. In case you aren't familiar with them, test cases are basically the expected outcomes for specific situations when using some application; they may include a detailed description of what steps to follow in order to get to a certain point where the actual testing takes place. This tool is very useful; if done correctly, it can actually help with both parts of the V&V process.
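As a sketch of what a written test case might contain, here's one represented as plain Python data; the field names and contents are invented for the example:

```python
from dataclasses import dataclass, field

# A made-up structure for a manual test case: an id, a description,
# the steps to reach the point under test, and the expected outcome.
@dataclass
class TestCase:
    case_id: str
    description: str
    steps: list = field(default_factory=list)
    expected: str = ""

login_case = TestCase(
    case_id="TC-042",
    description="Login with a valid account",
    steps=[
        "Open the login page",
        "Enter a registered email and password",
        "Press the 'Sign in' button",
    ],
    expected="The user lands on their dashboard",
)

print(login_case.case_id, "-", login_case.expected)
```

Comparing the actual outcome against `expected` covers verification; whether that expectation really matches what the user needs is the validation side.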

Other tools for the verification part may include reviews, walkthroughs and inspections, and they all can have different levels of rigor depending on how serious or formal the V&V process is.

The main obstacle for many organizations is how overwhelming it can be to correctly perform a strict verification and validation process. In reality, V&V can be somewhat simplified to be better understood, and this site has come up with five general steps that may be followed to meet basic requirements and still achieve great quality. Obviously the process isn't THAT simple, but these steps are only intended to serve as a basic guide to at least know what's supposed to be done throughout the process:

1. Create the validation plan. Helps identify the responsibilities of each of the participants and defines what has to be achieved, what criteria have to be met, what tools are to be used, and other similar things.

2. Define system requirements. Define what the system is supposed to do and classify all of it depending on what is specified in each requirement: resources, users, security, etc.

3. Create a validation protocol and test specifications. Create all of the tests to be performed in the next step as complete and detailed as possible.

4. Testing. Testing (do I even have to explain?)

5. Develop/revise procedures and final report. A final validation report is produced, reviewed, and approved. Its approval means the system is ready to be released.

In case the previous steps weren’t exactly what you expected, there are more possible interpretations! For instance, this webpage also lists some steps that may help with the V&V process, but this one has ten and rather than stating the activities that should be performed, it describes what kind of document or protocol should be completed by the end of each one. I won’t list any of that in this entry, but I definitely recommend checking it out.

This is yet another way of improving your software products by following certain steps and protocols and all of that stuff. For me, this process in particular can be very helpful for some people, since it aims to answer two very clear and simple questions. That may be more than enough to make developers understand what the actual goal is.

TL; DR: Verification and validation are different things.

*Code* 👏 *review* 👏


Responsibilities and chores. Maybe just reading these words is enough to tire some people out, but we all understand that whether they’re exhausting or not, whether they’re many or just a few, whether you have to do them yourself or not, they’re required for us to have good/healthy lives. It’s the price to pay for a better outcome in the long-term.

“Bond cleaning” flickr photo by Elaine_Smith https://flickr.com/photos/155416046@N05/35473335420 shared into the public domain using (CC0)

As programmers and humans, we are bound to make some mistakes every now and then. We can prevent many of them thanks to the design phase of development, but writing the code itself is a different story. The way we implement certain functions, the way we manage our resources, the algorithms we come up with, among many other things, will not always be the most efficient. It’s important to periodically go through our code to look for any mistake or to find some details that would need to be changed, and that’s (kind of) what code review is all about. Just like chores, this process is fundamental to achieve better results in the long-term.

Code review is an activity in which one or more people go through a programmer’s source code looking for said mistakes and details. Formally, the author of the code shouldn’t go through their own code by themselves, since they may overlook certain parts of the code or simply not be aware of an error in it. This “rule” has led to the creation of many techniques and methodologies for a better analysis of the code, which, along with systematic processes and specialized software/tools, has created many efficient ways for us to get our code reviewed.

Some of these tools help us check our code automatically, but I won’t be focusing on those, since they are better suited for what’s called static program analysis. Instead, I want to quickly explain what kinds of tools are used for this type of activity. Version control systems like Git (and the hosting platforms built around it, like GitHub) are useful since many developers can regularly check changes, revert them or even contribute their own, which is very handy for code review. Ticket systems, like Redmine, allow people to collectively review code. There are even more specialized tools for this particular activity; you can take a look at some of them and their particularities in this link.

These tools can be used for multiple different methodologies. Each of them has a different approach as to how the code review is to be performed and, obviously, they require different things. I’ll provide a brief explanation for some of them.

Email Thread. This one is really simple. The main idea is to send a file with the code you want reviewed via email to a colleague or a supervisor. Their workflow will eventually let them download/open the sent file and resend it with the appropriate corrections. This method is simple and flexible, but having to constantly download files and put all the corrections together may be a little more exhausting than it should be. This problem could be solved by using a repository managed with Git, but then it wouldn't be an email thread anymore.

Pair programming. Definitely one of the more popular approaches. This one consists of two people working on the same code at the same time while checking each other’s progress. This combines the programming and its revision, and can be really useful, but I feel like it presents the same risks as if the coding were being done individually, since both people could overlook some stuff.

Over the shoulder. This one is also very simple and very easy to perform. Similar to the email thread, once you finish coding you have to get someone to review your code, but with this method you walk them through the code and explain why you did things the way you did. This one's generally done with people in your same workspace (so no downloading is necessary), which makes it a more informal approach. It doesn't require documentation, but some way of keeping track of corrections and changes is highly recommended.

Likewise, there are tons and tons of recommended practices for code review. The ones I'll mention come from this website, in case you want to check it out!

Review between 200 and 400 lines of code at a time. The brain's effectiveness at processing information and detecting errors in code can diminish after reading approximately 400 lines of code, at least according to a study by SmartBear.

Don't review more than 500 lines of code per hour. In a similar way to the last recommendation, this one comes from the way we process information. If we do it too quickly, then we are probably not doing it correctly.

Don't review for more than 60 minutes at a time. Again, since our effectiveness diminishes as time passes, this recommendation intends to always keep us at our best when reviewing.
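Taken together, these three guidelines can be turned into a quick back-of-the-envelope calculation. This is just a sketch based on the numbers above, not an official formula from SmartBear:

```python
import math

# The three guidelines quoted above, as constants.
MAX_LINES_PER_SESSION = 400   # review at most ~400 LOC at a time
MAX_LINES_PER_HOUR = 500      # don't exceed ~500 LOC per hour
MAX_SESSION_MINUTES = 60      # cap each session at 60 minutes

def plan_review(total_lines):
    """Hypothetical helper: how many review sessions a changeset needs,
    and roughly how long each session should last."""
    sessions = math.ceil(total_lines / MAX_LINES_PER_SESSION)
    # Reading 400 lines at no more than 500 lines/hour takes at least
    # 48 minutes, which conveniently fits inside one 60-minute session.
    minutes_per_session = min(
        MAX_SESSION_MINUTES,
        math.ceil(MAX_LINES_PER_SESSION / MAX_LINES_PER_HOUR * 60),
    )
    return sessions, minutes_per_session

print(plan_review(1500))  # (4, 48)
```

So a 1,500-line change would call for about four sittings of roughly 48 minutes each.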

Now, the whole point of code review is, of course, to improve the quality of our code. It helps create defect-free, well-documented software; it makes our programs comply with enterprise coding standards and serves to teach and share knowledge between developers. When done correctly, peer reviews save time, reduce the required amount of work and can even save money. I can't stress enough how useful it is to do code review in the long-term.

TL; DR: Try code review!!!

Class ⇒ Code (Charlie Lima Alpha Sierra Sierra)


Are you fond of painting? There are multiple types of painting styles and techniques, each with their own characteristics. The same picture can be represented in multiple styles, and although the final product will be “the same”, the techniques inherent to each of those styles will make each of the versions distinct from each other. Some may evoke different feelings, some may highlight certain parts more than others, some may look different because of the physical properties of the paint. And, of course, some techniques may be better suited for certain goals than others.

“paint” flickr photo by Burntwood90 https://flickr.com/photos/66355109@N00/32410795875 shared into the public domain using (CC0)

Once again, the topic for this entry will be class design. It’s actually pretty fun to come up with new analogies for the same thing over and over. And one more time, I’ll be taking UML class diagrams as my main reference for this post, so have that in mind.

As I’ve already stated in my modelling languages post, class diagrams are a way of graphically representing an Object-Oriented structure; they display behaviors and relations among classes, which is particularly useful for the design stage of development. After having designed all of our classes the ideal thing is to plan on how the system is going to be implemented, and that’s when we face a new issue: what language are we supposed to use for said implementation?

Just like painting and its different techniques, programming has a similar relationship with its different programming languages. If we want to develop an application, there are tons of languages we could use and objectively achieve the program's purpose. However, different languages have different characteristics: some may be more efficient regarding resources like time or memory, others may have functions better suited to what is needed, and others can simply be preferred by certain developers. Regardless, it's important to determine which language is more appropriate for a given situation.

Let's get the obvious out of the way. Classes by themselves are fundamental parts of object-oriented programming, so it's much easier to implement class diagrams with a language that supports object-oriented programming. The thing is that, even among these languages, we still have multiple options. Different object-oriented languages have different degrees of object orientation and, therefore, can adapt to different necessities. A few distinctions derived from these differences are the following:

  • “Pure” OOLs, where everything is treated as an object (Python, Smalltalk,…)
  • Languages designed for OOP that still have some procedural elements (Java, C++,…)
  • Procedural languages with some OO elements (PHP, MATLAB,…)

Regardless of which subclassification of OOL is chosen, the process of implementation will most likely be very easy. Class diagrams already define classes, methods and attributes, so translating that into any of the languages above is a simple task.
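For instance, an invented UML class box named Account, with a private balance attribute and public deposit/withdraw operations, translates almost mechanically into Python:

```python
# Hypothetical UML class: Account, with a private attribute "balance"
# ("-" visibility) and two public methods ("+" visibility).
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance      # "-" (private) in UML -> underscore prefix

    def deposit(self, amount):       # "+" (public) in UML -> plain method
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):               # read-only access to the private attribute
        return self._balance

acct = Account()
acct.deposit(100.0)
acct.withdraw(30.0)
print(acct.balance)  # 70.0
```

Attributes become instance fields, operations become methods, and UML visibility markers map onto the language's naming conventions.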

One thing that I'd like to point out is that a lot of Object-Oriented Languages aren't actually exclusively Object-Oriented. Many of them (most notably the more popular ones, like Java, Python and C++) are actually multi-paradigm languages. This means that they support multiple programming paradigms; if OO is one of them, then they can implement class diagrams easily, but they can also (even if that's not completely related to the class ⇒ code topic) adopt many other functionalities available from other paradigms.

Now, if we wanted to implement class diagrams (or any kind of OOP design for that matter) in a non-OOL, it would not be an easy task. Practically every feature shown in OOP diagrams is specific to the object-oriented paradigm, so there wouldn't be any support for implementing such diagrams as they are. There are ways in C, for instance, to manipulate structures and achieve behaviors similar to those of OOP, but I'd rather recommend modifying the diagrams to better fit the paradigm that's available in a given language; it's probably easier and would take less time to complete.

Whether we decide to implement a diagram in an OOL or a non-OOL, design is key for a better outcome. There's a reason artists do sketches, right? Deciding what language to write code in is also a very important step of the development process. I'd say that sometimes the differences between one language and another are not that significant, but when specific necessities must be fulfilled, it's the developer's duty to correctly determine the best option for the code.

TL; DR: Don’t implement Object-Oriented Designs in a non-Object-Oriented Language.

Also, most of the information I got for this entry actually came from forums or opinions on different websites like this or this. I justify that decision in the second partial reflection!

Class > Tlass > Taass > Tabss > Tabls > Table


Have you ever written a summary? Whether it was of a chapter from a literature book or of one of your class’s lessons, all you had to do was to read whatever you needed to, identify the main ideas and their most important complements and turn all of that into a new transcript. It’s simple, really. All you are doing is turning plain text into less plain text, but, what if that wasn’t the case? What if you wanted to change the source’s format and following a couple steps wasn’t enough?

Let’s say you just read a chapter from one of your school’s books and want to make a mind map with its information. Only identifying main ideas wouldn’t cut it, you would also need to classify them, connect them and be able to express ideas in fewer words. This isn’t as simple as a summary, and the same happens with timelines, diagrams, other kinds of maps and many other similar representations. The methods have essentially the same purpose, but extra or different steps may be needed to achieve it.

I’ve talked about class diagrams before and how they’re used in OOP to denote object classes, their attributes and methods; what their components are and why they can be useful. This time I will talk about one of the processes in which class diagrams get to be implemented, more specifically: converting class diagrams (or similar) to database tables.

Lots of tables in a class (get it?):

“Orderly Structure” flickr photo by cogdogblog https://flickr.com/photos/cogdog/2555287254 shared into the public domain using (CC0)

If you've ever worked with databases then you must know that they can be classified into relational and non-relational. In case you haven't, I'll explain the key (hehe) difference regarding class conversions: relational databases, which mostly use SQL for queries, store data in tables with rows and columns, while non-relational databases represent data differently depending on the type of data they support.

Since relational databases' tables always have the same structure (rows and columns), mapping class representations onto these tables is easier. This is also the reason why there are many tutorials on how to do this kind of translation. The process can be compared to the summaries I mentioned in the first paragraph of this entry. There's not much to think about: you only need to identify certain patterns or elements in one representation and apply the appropriate method to make them work in the other.

I found a very complete tutorial that even includes an example for how to follow the conversion process. I will now list the steps it includes and let you visit the website in case you are interested:

  1. Replace the standard identifier stereotype stid by pkey.
  2. Replace the platform-independent data type names by their SQL equivalents.
  3. Eliminate multi-valued attributes.
  4. Turn any enumeration data type into a corresponding table.
  5. Eliminate generalization relationships.
  6. Eliminate associations.
  7. Define an index for attributes that serve as a search field.

Yes, only seven steps are enough for this implementation of a class. But remember, they are designed to work for UML class diagrams, so other class representations may need some adjustments. Regardless, this is still simple.
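As a sketch of where those steps lead, here's an invented Book class turned into a SQLite table from Python; the attribute names, types and data are made up for the example:

```python
import sqlite3

# Hypothetical "Book" class with attributes isbn (the standard identifier),
# title and year, mapped to a relational table loosely following the steps above.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE book (
        isbn  TEXT PRIMARY KEY,  -- step 1: the standard identifier becomes the pkey
        title TEXT NOT NULL,     -- step 2: platform-independent String -> SQL TEXT
        year  INTEGER            -- step 2: Integer -> SQL INTEGER
    )
""")
# Step 7: define an index for an attribute used as a search field.
con.execute("CREATE INDEX idx_book_title ON book (title)")

# Each object of the class then becomes a row in the table.
con.execute("INSERT INTO book VALUES (?, ?, ?)", ("978-0-00", "Sample Title", 2020))
row = con.execute("SELECT title FROM book WHERE isbn = ?", ("978-0-00",)).fetchone()
print(row[0])  # Sample Title
```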

On the other hand, classes can also be implemented in non-relational databases. The problem is that, since the structures used to store data may vary greatly, there's no standard method. Also, since non-relational databases are neither as common nor as well studied, this kind of process may be even less common. This is similar to the example I gave with mind maps.

As I was doing research for this I found two websites, each explaining a different process to map class diagrams to non-relational databases. Now you may be thinking: Oh, well, if people already figured out how to map that, then the problem must’ve been solved, right?. But the thing is, they explain different processes because they focus on different non-relational structures. One does it for graphs, while the other, for documents. The fact that they only work for certain formats, along with the information included in these websites leads me to think that this kind of mapping can be more complicated than it seems. Also, both processes start from UML diagrams, so they’re not even complete in that regard. Regardless, they seem to work fine for what they claim, so that’s something, at least.

A funny thing happened as I finished writing this entry. I realized that the comparisons I made between mapping and summaries were actually present while writing some of the paragraphs in this post. Let me explain: since there's so much information about relational databases and the conversion from classes to tables, it was easy to figure out what to write. I just had to take the main ideas from any page I found: a summary. But it was different for the non-relational part; there was definitely enough information for me to use, sure, but I had to take different pieces from several websites so I could understand, reinterpret and write about it: somewhat similar to the example with mind maps.

Mapping to non-relational databases may be complicated, but not impossible. Using relational databases is the easier solution, though. Besides, SQL has by far more documentation, which is also a great advantage and why I'd definitely recommend using it.

TL; DR: Stick to SQL.

Unexpected Metonymy Linkages II


Before I even begin with one of my weird analogies, have in mind that I'm assuming you've already read my previous entry, so if I don't have much of an introduction or there seems to be missing information, that's the reason.

“Crayons” flickr photo by idreamlikecrazy https://flickr.com/photos/purple-lover/5979587149 shared under a Creative Commons (BY) license

One of the things I mentioned in the first part of UML is that diagrams are classified into categories depending on their primary symbols. This time I'll cover three other types of diagrams:

First, if the diagram contains packages it's classified as a package diagram. Package diagrams are a subcategory of structure diagrams. Packages are a way of grouping different elements into one; a package works (and also looks) like a folder, and its use in a package diagram is to simplify the system and give it a better structure. Packages can interact with each other to form a logical structure that tells how different components work together. It's almost like package diagrams are maps of the world, and each of the packages is the (more detailed) map of a country.

When the primary symbols of a diagram are behavioral states and transitions, it belongs to the state machine diagram category. This kind of diagram is pretty intuitive and rather simple: it contains all the possible states our system can be in, such as active, out of order and waiting. These diagrams include what actions have to be taken in certain states in order for the system to transition into another one. They're very, very similar to finite automata, but not as complex.

Component diagrams are UML diagrams used for Component-Based Development, where components interact with each other. These components act as pieces that can be replaced, swapped or rearranged when necessary. The goal of using components is to be able to develop each one of them independently from each other. These components can be either logical or physical and often have associated interfaces or ports in order to interact with other components.
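The replaceable-component idea can be sketched in Python with an abstract interface acting as the “port” between components; all the names here are invented for the example:

```python
from abc import ABC, abstractmethod

# The "port": an interface that components agree on.
class StorageComponent(ABC):
    @abstractmethod
    def save(self, key, value): ...

    @abstractmethod
    def load(self, key): ...

# One replaceable component implementing the interface.
class MemoryStorage(StorageComponent):
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data[key]

# The application depends only on the interface, not on a concrete component.
class App:
    def __init__(self, storage: StorageComponent):
        self.storage = storage

app = App(MemoryStorage())   # swapping in another StorageComponent is trivial
app.storage.save("greeting", "hello")
print(app.storage.load("greeting"))  # hello
```

Because App only knows about the port, a file-backed or network-backed component could be developed independently and plugged in later.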

There are obviously a lot more categories of UML diagrams, but there are too many for me to cover them all. Instead, I recommend checking out this site to read about UML 2.5's considered classifications and this one to see some examples of different UML diagrams. Those helped me understand many of the diagrams in this and in the previous post. UML-diagrams.org was my main source when doing research for these entries; they have detailed explanations of every single possible component in a UML diagram, so that can also be useful at some point.

Now, I've already talked about design patterns and many Object Oriented Programming related subjects in other blog posts, so the next concept should be easier to grasp (hehe). GRASP stands for General Responsibility Assignment Software Patterns, and it's actually the name of a set of design patterns that aid Object Oriented Design. I think the name fits perfectly, since it already describes what these patterns' objective is. But just in case it's not completely clear: GRASP is a set of patterns that assign certain responsibilities to objects and classes in the OOP paradigm.

GRASP makes use of the following patterns and principles:

Creator – Class responsible for creating objects. Similar to the Factory design pattern.

Indirection – Assigns responsibility of mediation to an intermediate object.

Information expert – Determines where to delegate responsibilities.

High cohesion – Keeps objects appropriately focused, manageable and understandable.

Low coupling – Dictates how to assign responsibilities so that dependencies between classes stay low, making them easier to change and reuse.

Polymorphism – Defines variation of behaviors based on types.

Protected variations – Protects elements from the variations on other elements.

Pure fabrication – A class made for the sole purpose of achieving low coupling and high cohesion.

Controller – An object responsible for receiving and handling a system event.

I mentioned controller at the end of the list because it’s actually an important component in the next pattern I want to briefly explain: MVC. Model-View-Controller is yet another design pattern. This one divides the associated program into three interdependent systems. Each of them has a specific role in the whole thing, and the reason for doing this is so the representation of information is separated from the actual manipulation of it. I will be talking a little more about this particular pattern in my second partial reflection, since something funny regarding this happened last week.
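Here's a deliberately tiny MVC sketch in Python, with invented class names, just to show the separation of roles:

```python
# Model: holds the data, knows nothing about presentation.
class CounterModel:
    def __init__(self):
        self.count = 0

# View: presentation only, knows nothing about events.
class CounterView:
    def render(self, count):
        return f"Count: {count}"

# Controller: receives "events" and mediates between model and view.
class CounterController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_increment(self):
        self.model.count += 1
        return self.view.render(self.model.count)

controller = CounterController(CounterModel(), CounterView())
print(controller.on_increment())   # Count: 1
print(controller.on_increment())   # Count: 2
```

The point is that the representation of the information (the view) is kept separate from the data and its manipulation, so either side can change without breaking the other.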

All of these patterns and models may seem like advanced stuff that we may never get to use or even see, but the more time I've spent studying and working on different projects, the more I've realized that many of them are waaay more common than I'd first thought.

Note: I was told that I could have only written about even more UML diagram classifications. I could’ve done that, but I wouldn’t have had a chance to think of any new comparisons and that’d have been boring and not challenging at all. 

TL; DR: You’ll eventually find every pattern implemented in some way.
