CHAOSS Weekly Newsletter

By News

GrimoireLab version 0.2.22 was launched last week. This release is the first to officially include Graal, a generic repository analyzer, and is the first step toward having the platform handle the data this tool produces.

Read More

CHAOSS Weekly Newsletter

By News

CHAOSS is participating in the 2019 Grace Hopper Open Source Day (https://ghc.anitab.org/tag/osd/). Sean and Carter from the Augur team will lead a session on October 3, 2019. This is a great opportunity for CHAOSS to be involved in this fantastic event!

Read More

VMware Open Source

By Blog Post

Back in October, my colleague John Hawley and I reflected on our visit to last year’s U.S. CHAOSScon, where we gave a talk on “The Pains and Tribulations of Finding Data.” At the end of that post, we mentioned learning more at the conference about GrimoireLab’s Perceval tool for tracking data from multiple open source projects on a single dashboard. That opportunity helped me develop the work that was the subject of a talk I gave at the recent CHAOSScon Europe 2019.

Read More

GrimoireLab – Graal

By Blog Post

Currently, GrimoireLab can produce analytics from data extracted from more than 30 tools related to open source development, such as version control systems, issue trackers, and forums. Despite the large set of metrics available in GrimoireLab, none of them relies on information extracted from source code, which prevents end users from benefiting from a wider spectrum of software development data.

Read More

Contributing to the GMD Working Group

By Blog Post

The GMD Working Group is one of the CHAOSS working groups, tasked with defining useful metrics for analyzing software development projects from the point of view of GMD (growth-maturity-decline). It also works in the areas of risk and value. For all of these, we intend to follow the same process to produce metrics, similar to what other CHAOSS working groups are doing. This post describes that process, which we have recently completed for the first metric (many others should follow in the coming weeks).

Read More

Metrics With Greater Utility: The Community Manager Use Case

By Blog Post

Community managers take a variety of perspectives, depending on where their communities are in the lifecycle of growth, maturity, and decline. This is an evolving report of what we are learning from community managers, some of whom we are working with on live experiments with a CHAOSS project prototyping software tool called Augur (http://www.github.com/CHAOSS/augur). At this point, we are paying particular attention to how community managers consume metrics, and to how the presentation of open source software health and sustainability metrics can make them more useful, and in some cases less useful, for doing their jobs.

Read More

Open Community Metrics and Privacy: MozFest’18 Recap

By Blog Post

Open communities lack a shared language for talking about metrics and sharing best practices. Metrics are aggregate information that summarises raw data into a single number, stripping away the data's context. Pedagogical metric displays are an idea for metrics that include an explanation and educate the user on how to interpret the metric. Metrics are inherently biased and can lead to discrimination. Many of the problems raised during the MozFest session are being worked on in the CHAOSS project.

Read More

Reflections on CHAOSScon NA 2018

By Blog Post

Previously, we’ve explored the challenge of measuring progress in open source projects and looked forward to the recent CHAOSScon meeting, held right before the North American Open Source Summit (OSS). CHAOSS, for those who may not know, is the Community Health Analytics Open Source Software project. August’s CHAOSScon marked the first time that the project had held its own, independent pre-OSS event.

Read More

Helpful and Useful – The Open Source Software Metrics Holy Grail

By Blog Post

My colleague Matt Germonprez recently hit me and around 50 other people at CHAOSScon North America (2018) with this observation:

“A lot of times we get really great answers to the wrong questions.”

Matt explained this phenomenon as a “type III error”, an allusion to the better-known statistical phenomena of type I and type II errors. If you are trying to solve a problem or improve a situation, sometimes great answers to the wrong questions can still be useful, because in all likelihood somebody is looking for the answer to that question! Or maybe it answers another curiosity you were not even thinking about. I think we should call this information encountering (Erdelez, 1997). There’s an old adage:

“Even a blind squirrel finds a nut every once in a while.”

Read More