Data science newsletters

A few colleagues asked me about newsletters on data science topics that I frequently read. Below are two sets that I have been following. Enjoy!

KDnuggets, O’Reilly Data, and DataTau usually feature short introductory articles, for instance, best practices, war stories, concepts, and tools. These articles are good for people trying to make sense of the data science field. I quickly browse them daily and don’t spend more than 15 minutes reading.

Now, if you need to go a bit deeper or are looking for a specific topic, try Data Elixir and Data Science Weekly. These sources produce a weekly compilation of articles; each article takes about 30 minutes to read.

P.S.: If you know any similar interesting newsletters or blogs, please let me know. Thank you!

Conversations between Mentor and Mentee

… a nice continuation from my previous post.

Mentor and mentee conversations are high moments of learning for both sides. These are the moments when the mentor is figuring out what feedback the mentee most needs and how to convey it effectively. Meanwhile, the mentee is doing a lot of mental work – expressing her ideas, understanding the impact of a mentor’s question, and even trying new perspectives suggested by her mentor. In effect, the mentee is learning to think on the fly!

For all such actions to happen satisfactorily, mentor and mentee must develop a good conversation practice. Although this is yet another topic on which we receive little structured training, there is very exciting material around. Take, for instance, an article by Rick Reis about strategies for productive conversations between mentor and mentee [1], from which I selected a few tidbits to entice your interest:

A. What are good questions to foster this type of conversation (e.g., questions that are open-ended, challenging, and that tap into the mentee’s experience and skills)?

B. How do you listen effectively while having a conversation (e.g., acknowledging each other’s feelings, paying attention to what is being said instead of focusing on preparing your next intervention, etc.)?

C. Which media should you use for conversation, and how (e.g., phone, email, live chat)?

This last item motivates my next post – “Another cautionary note about email”.

[1] Rick Reis, “1409. Strategies for Good Conversation”  – http://cgi.stanford.edu/~dept-ctl/cgi-bin/tomprof/enewsletter.php?msgno=1409

Talking and Listening Tips

I collected some tips about effective talking and listening from videos [1][2], readings [3][4], and a workshop I attended at UCI [5].

I - Best practices (from Julian Treasure [1])

A. Four virtues of speakers: honesty (be clear and straight), authenticity (be yourself), integrity (be your word), and love (wish them well).

B. Six tools for talking: register, timbre, prosody, pace, pitch, and volume.

C. Seven sins of talking: gossip, judging, negativity, complaining, excuses, lying, and dogmatism.

II - Warm up your voice and body

This usually involves making sounds with your lips, tongue, throat, chest, arms, etc. It also involves relaxing your body. Search for warm-up videos on YouTube; there are many.

III - Posture

Your posture speaks about you. Being conscious of our posture while speaking helps us to be understood and to understand others [4]. Our brain changes with the way we stand [2][3].

References:
[1] http://www.juliantreasure.com/

[2] http://www.ted.com/speakers/amy_cuddy

[3] Dana R. Carney, Amy J.C. Cuddy, and Andy J. Yap, Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance, in Psychological Science, Sage, 2010, DOI: 10.1177/0956797610383437
Available at: link

[4] Pierre Weil and Roland Tompakow, Notre corps parle : Le Langage silencieux de la communication non verbale, Courrier du Livre (September 5, 1989)

[5] Effective Communication Workshop at UCI (link)

What is a Theory in Software Engineering

Product of an inspiring walk around campus with my colleague Lee Martie.

 

What is a theory in Software Engineering?
Regardless of scientific field, a theory is a model used to predict and to explain phenomena. In software engineering, many authors (e.g., [Sca02] and [PW92]) call these predictive and descriptive models, respectively.

 

How do we evaluate these models in software engineering?
A predictive model is evaluated by contrasting the expected outcomes against the actual outcomes of a software process/method. For that, outcomes must be described in a way conducive to measuring and comparing results. Such a description of outcomes requires a descriptive model (e.g., which set of metrics will be used to evaluate the outcome of this new software process?).
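To make this concrete, here is a minimal sketch of that evaluation step, with hypothetical metric names and numbers chosen purely for illustration: the descriptive model is the shared set of outcome metrics, and the predictive model is judged by how far its predictions fall from the measured values.

```python
def evaluate_predictions(predicted, actual):
    """Return the absolute error for each metric shared by both descriptions."""
    shared = predicted.keys() & actual.keys()  # metrics both models describe
    return {m: abs(predicted[m] - actual[m]) for m in shared}

# Hypothetical outcomes of a software process, described via a common
# set of metrics (the descriptive model):
predicted = {"defect_density": 0.8, "effort_person_months": 12.0}
actual = {"defect_density": 1.1, "effort_person_months": 14.5}

errors = evaluate_predictions(predicted, actual)
for metric, err in sorted(errors.items()):
    print(f"{metric}: prediction off by {err:.2f}")
```

The point of the sketch is simply that comparison is only possible over metrics both descriptions share; a predicted metric that was never measured (or vice versa) drops out of the evaluation.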

 

A descriptive model is evaluated by looking at the steps taken to produce an outcome. For instance, how can someone tell whether a requirements model is consistent with user goals? To answer that, one has to look at how the requirements were elicited and validated against user goals. Such steps are part of a prescriptive model of goal-based requirements engineering.

 

How do I evaluate whether my tool has actually led me to a theory?
I believe that a “theory of a tool” is a model for a family of tools. As such, it must be compared against other competing models. How do we make that comparison? Karl Popper wrote that the measure of the quality of a theory is how falsifiable that theory is [Pop63]. This does not mean that my model is more complete. Rather, it means that my model provides the theoretical or empirical means to demonstrate that under certain circumstances (e.g., usually, inputs and preconditions in a usage scenario) a certain outcome is expected (deterministically or probabilistically).
In other words, if my model claims that a set of features causes certain outcomes to happen, then models that lack those features cannot be refuted by setting up the same circumstances (inputs) and predicting the same outcomes.

 

Of the many tool features, which can actually be articulated in a model? (See picture below)
The features must be grounded in reality, so that we can rely on empirical evidence produced by us or by previous research. The features must also relate to the ontology of our family of tools; in other words, features must be intelligible to people who are acquainted with that family of tools. The importance of this is twofold. First, it guarantees that the model is understandable, and therefore amenable to being reified in a tool. Second, it reduces the risk that the problem our model aims to solve has already been solved by a simpler solution (Occam’s razor).

 

References:
[Sca02] Walt Scacchi. Process Models in Software Engineering, in J. Marciniak (ed.), Encyclopedia of Software Engineering, 2nd. Edition, Wiley, 993-1005, 2002
[PW92] Dewayne E. Perry and Alexander L. Wolf. Foundations for the Study of Software Architecture. ACM Software Engineering Notes 17(4):40-52, October 1992.
[Pop63] Karl Popper. Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, ISBN 0-415-04318-2

 

[Whiteboard photo: Path to Theory in SE]

Here are some topics I plan to post about:

  • System Frontier Choices Based on Essence and Accident Criteria: essence as the attribute of a scientific method and not of an ontology
  • Application of Paradigmatic Classes of Life-Cycle Dependencies: advantages of discriminating systems based on the dependencies among their core entities
  • Heuristics for Quality Convergence During Severe Time-Constrained Test Phases
  • Anti-patterns of Requirements Engineering and Management
  • Tensions and Equilibrium in Software Projects
  • Generation Theory Comments on Software Teams and Education
  • Book reviews