Social Media

CHAOSS YouTube Channel


Upcoming Events

Join us at conferences, workshops and hackathons.

Open Source Summit Europe 2018 (October 22-24)

MozFest 2018 (October 26-28)

CHAOSSCon EU 2019 (February 1, 2019, Brussels, Belgium, before FOSDEM)

CHAOSScon Europe 2018

News

Reflections on CHAOSScon NA 2018


By Alexandre Courouble and John Hawley

This blog post originally appeared on the VMware Open Source Blog on October 9, 2018.

Previously, we’ve explored the challenge of measuring progress in open source projects and looked forward to the recent CHAOSScon meeting, held right before the North American Open Source Summit (OSS). CHAOSS, for those who may not know, is the Community Health Analytics Open Source Software project. August’s CHAOSScon marked the first time that the project had held its own, independent pre-OSS event.

After attending the event, we thought it might be interesting to share our takeaways from the conference and reflect on where we stand with regard to the challenges that we outlined both in our previous posts and in our CHAOSScon talk, “The Pains and Tribulations of Finding Data.”

For those who weren’t able to attend CHAOSScon and would like to see our talk, it’s now available for viewing here. We started with an overview of current solutions for gaining visibility into open source data and then outlined what we view as the challenges currently standing in the way of creating solid progress metrics for open source development. We’ll go back to the talk at the end of the post, but first, our overall takeaways.

One thing we appreciated about having a dedicated CHAOSScon was that the event attracted a mix of longtime colleagues and collaborators as well as people new to the community. In particular, there was a strong presence from corporate open source teams. Engineers from Twitter, Comcast, Google and Bitergia shared how they have been tackling different kinds of open source data challenges. Hearing about their own trials and tribulations definitely seemed to validate our impression that we share a number of basic data measurement problems that are worth addressing as a community.

It was good, too, to see CHAOSS welcoming these corporate perspectives. Open source conferences often eschew that kind of engagement, but it is useful to hear how teams are solving problems for themselves out in the wild. Here’s hoping that this marks the start of a new trend.

A pair of workshops in the afternoon offered another useful takeaway. One was on “Establishing Metrics That Matter for Diversity & Inclusion” and the other was a report from the CHAOSS working group on Growth-Maturity-Metrics.

It was clear from the latter workshop that we now have a good number of quantifiable data points to establish where a project is on the growth-maturity-decline continuum. But diversity metrics present a much trickier challenge. The data there exists mostly in mailing lists and board discussions and is currently only really explored through surveys. But the issue provoked a really interesting discussion full of smart suggestions and we’re excited to see what new solutions the community will come up with in the future.

Turning to what we learned from our own panel, we were thrilled to be speaking in front of a similarly engaged audience. We opened with a shout out to the open source projects that have already created tooling around data acquisition and we were lucky enough to have maintainers from many of those projects in the room with us. It was good of our audience to indulge a presentation heavy on questions and light on answers. They seemed genuinely curious about the issues we were raising and interested in trying to figure out how to fundamentally address them – some even started working on potential solutions as we were speaking.

We didn’t arrive at any grand consensus on solutions, but it’s clear that there is active community interest in trying to at least understand the problem of open source metrics and how we might be able to solve it. That’s certainly inspiring us to keep working on the issue—after all, things will only get better as more ideas get discussed, researched, tried and retried. This is not something we expect to be magically fixed in a couple of steps, but we’re excited to keep reaching out to the colleagues we interacted with at the conference and see what develops.

Our final takeaway is a classic example of conference serendipity. We arrived there knowing about GrimoireLab, a tool for tracking data about multiple open source projects on a single dashboard—we even referenced it in our talk. But what we didn’t know is that it’s easy to create your own implementation of it. We attended a presentation where several groups shared how they had implemented GrimoireLab with success and we’re now implementing it internally ourselves to track the status of our open source projects. Talk about a win-win situation.

Stay tuned to the VMware Open Source Blog for future conference recaps and follow us on Twitter (@vmwopensource).

‘Helpful and Useful – The Open Source Software Metrics Holy Grail’


By Sean Goggins

My colleague Matt Germonprez recently hit me and around 50 other people at CHAOSScon North America (2018) with this observation:

“A lot of times we get really great answers to the wrong questions.”

Matt explained this phenomenon as a “type III error”, an allusion to the better-known statistical phenomena of type I and type II errors. If you are trying to solve a problem or improve a situation, sometimes great answers to the wrong questions can still be useful, because in all likelihood somebody is looking for the answer to that question! Or maybe it answers another curiosity you were not even thinking about. I think we should call this information encountering (Erdelez, 1997). There’s an old adage:

“Even a blind squirrel finds a nut every once in a while.”

For open source professionals a “Blind Squirrel” is little more than the potential name for a jazz trio, and probably not the right imagery for explaining to your boss that you’re “working on open source metrics”. Yet these blind squirrels will encounter nuts a LOT more often if we make more nuts! “Metrics are nuts!”. Not a good slogan, but that’s my metaphor. Making more metrics is easy for us because we have lots of data and we write software, and it stands to reason that making more metrics will yield more useful ones. If you are the blind squirrel, it’s useful to find metrics.

Can you imagine all the useful things blind squirrels would find if we let them loose in an Ikea? “I came for the Swedish meatballs, I left with 2 closet organizing systems and a new kitchen”! A lot of things are useful, but in order for something to be helpful it needs to help you meet an important goal. To summarize:

Useful: Of all the different things I find in the Ikea, many of them are useful. Or, there are 75 metrics on this dashboard, and 3 of them are useful!

Helpful: You go into the endeavor with a goal, and leave with 3 metrics that help you achieve that goal. Or, you’re a blind squirrel that just ordered nuts online from Ikea.

Open Source Software Health Metrics: Let’s Go Crazy! Let’s Get Nuts!

Great answers to the wrong questions are more commonplace than we would prefer because open source software work is evolving quickly and we do not yet have a list of the right questions for many specific project situations. Let’s refer to questions as “metrics” now. Questions and metrics are nuts! Still a terrible slogan. Sometimes we do not know the question-metric-nut, and foraging through a forest of metrics is, if not helpful, a way to reduce the rising anxiety we feel when we are not sure what data helps to support our explanation of what is happening in a project ecosystem. So, if, like me and dozens of others working in and around the CHAOSS project, you are trying to achieve a goal for your project, there are two orthogonal, strategic starting points our colleague in CHAOSS, Jesus M. Gonzalez-Barahona, suggests:

  1. Goals: What are metrics going to help you accomplish?
  2. Use Cases: When you go to use metrics, what are the use cases you have? A case can be simple, ill-formed and even “unpretty”:
    1. “My manager wants to know if anyone else is working on this project?”
    2. “It seems like my community is leveling off? Is it? Or is it just so large now I cannot tell?”

Taking Action by Sharing Goals and Use Cases

Having a yard full of nuts to sort through can help you work toward the nuts you want. OK. The nut metaphor has gone too far. We are looking to use software, provided as a prototype and an example, to help talk through the details of the use cases you name, with you. The use cases that open source developers, foundations, community managers and others use to evaluate open source software health and sustainability are probably a manageable number.

We can give you some metrics to work with quickly using the CHAOSS sponsored metrics prototyping tool Augur.

What are we trying to accomplish with metrics? With Augur? One of our goals is to make it easier for open source stakeholders to “get their bearings” on a project and understand “how things are going”. We think that’s most easily accomplished when comparisons to your own project over time, and to other projects you are familiar with, are readily available. Augur makes comparisons central.
Building Helpful Metrics

If you have already shared a list of repositories you are interested in with us, here’s what you have:

  1. an Augur site with those repos
  2. The opportunity to look at that site and help the whole CHAOSS community know:
    1. What use cases which particular metrics help you address
    2. What goals you have that could be met by something like Augur, but you cannot meet yet
    3. Something to hate. If you’ve ever been to an NHL game, you know that hating the other team is how we show our team we love them. It’s also a good brainstorming device.

So, OK. What do you want?

We want the opportunity to speak with you about your goals, use cases, and the failings of the tools currently at your disposal for “getting there”. If you’re feeling adventurous, I would like to be able to reference our conversations (anonymously) in research papers, because research papers are kind of the “code of the academic world”. That’s less important.

An Augur Experiment


If you do not have a list of repositories you have already shared with us, there are a few examples here: http://www.augurlabs.io/live-examples/.

Design Goals

The version of Augur that’s currently deployed has several design goals that seek to provide useful information through comparison within a project (over time) and across projects. The most fundamental metrics people are interested in include:

  • What individuals committed the most lines of code in a time period?
  • From what companies or other organizations are the individuals who committed the most lines of code in a time period?
  • Derivative of the first two: Is this changing? Did I lose anyone? Who can this project NOT afford to lose?
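To make the first of those questions concrete, here is a minimal sketch of how one might rank committers by lines changed. It is not Augur’s or Facade’s actual implementation; it simply parses output in the style of `git log --pretty=format:'--%an' --numstat`, with the `--` author-marker format being our own illustrative convention.

```python
from collections import defaultdict

def top_authors_by_lines(numstat_log: str, n: int = 2):
    """Rank authors by lines added + removed, parsed from
    `git log --pretty=format:'--%an' --numstat` style output."""
    totals = defaultdict(int)
    author = None
    for line in numstat_log.splitlines():
        line = line.strip()
        if line.startswith("--"):
            author = line[2:]  # commit header line: the author name
        elif line and author:
            added, removed, _path = line.split("\t")
            # binary files report '-' for line counts; skip those
            if added.isdigit() and removed.isdigit():
                totals[author] += int(added) + int(removed)
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

sample = """--Alice
10\t2\tsrc/main.py
--Bob
1\t1\tREADME.md
--Alice
5\t0\tsrc/util.py
"""
print(top_authors_by_lines(sample))  # [('Alice', 17), ('Bob', 2)]
```

A real tool would also need to handle merge commits, author aliases, and time windows, which is exactly the kind of bookkeeping Facade and Augur take care of for you.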

Projects You Care About

Figure 1 is an example from Twitter, which shows an instance of Augur configured for all of the repositories in the Twitter ecosystem. When you go to http://twitter.augurlabs.io you get the list of repositories that you see in figure 1.

Figure 1. When you follow the URL above, or your own URL, you will see a list of repositories that we have cloned and for which we have calculated all the salient, basic, individual repository information, using the technology behind “Facade”, a tool written by Brian Warner. Here’s a list of those repositories.

 

Looking at my projects

When I look at the most basic data for one of my repositories, I have enough information to answer the most basic questions about it (see above). Figure 2 and Figure 3 illustrate the Augur pages you will see at the next level of “drill down”. Try clicking the months for even more information! Keep in mind this is ONLY the information for the repositories you shared with us, or the repositories that are part of one of our other live examples.

Figure 2. You can see the lines of code from the top two authors, as well as the space-inefficient Augur tool bar. Please contact me if you have tips and tricks for getting developers to be more comfortable with putting aesthetics behind utility in web page design. I will buy you a case of beer.

 

Figure 3 is a second image of the same page, but scrolled down just far enough to see that you can look at the top ten contributors as well as the top organizational contributors. We used a list of over 500 top-level domains, as well as tech companies we were able to “guess”, to start to resolve even these prototypes to specific companies. We did this because Amye asked us to, and we’re really gunning to make Gluster have more lustre. As if that’s possible.

Figure 3. A more detailed look at some of the information available on a repository by repository basis in Augur. We also show you the organizational affiliation information.
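The domain-based “guessing” described above can be sketched in a few lines. This is an illustrative assumption about the technique, not the actual CHAOSS/Augur affiliation code, and the domain-to-organization map here is made up for the example.

```python
# Hypothetical sketch: resolve committer emails to organizations by domain.
# The ORG_DOMAINS map is illustrative only, not the actual list Augur uses.
ORG_DOMAINS = {
    "twitter.com": "Twitter",
    "redhat.com": "Red Hat",
    "vmware.com": "VMware",
}

def affiliation(email: str) -> str:
    """Map an email address to an organization by its domain,
    falling back from subdomains (mail.vmware.com -> vmware.com)."""
    domain = email.rsplit("@", 1)[-1].lower()
    parts = domain.split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in ORG_DOMAINS:
            return ORG_DOMAINS[candidate]
    return "Unknown"

print(affiliation("dev@mail.vmware.com"))  # VMware
print(affiliation("someone@gmail.com"))    # Unknown
```

Real affiliation resolution is messier: personal addresses, contractors, and acquisitions all mean that domain matching only gets you a first approximation, which is why the post calls these results “guesses”.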

Explore the Rest of Augur

The focused repositories give the information that many open source folks tell us is their first line of interest when looking at their own projects. Keeping this conversation going is essential for the CHAOSS project, and for Augur’s utility in helping us identify which metrics map to which use cases and goals. There’s a lot here, and it might give you ideas. Also, as you go through the front end, keep in mind that all of the statistics you see represented as metrics are also available via our RESTful API. You can use our data to explore building your own metrics. Or get an app developer to do that for you. Figure 4 provides a high-level overview of the metrics representations in Augur that are built off the GitHub API, GHTorrent and Facade’s technology.

Figure 4. There’s a lot here. At the top of the screen you can enter an owner and a repository name to get information about a particular repository. Each of the CHAOSS metric working groups is represented in tabs at the top of the screen (number 1). The repository you just searched for is listed below the metric category (number 2). The metric name is listed in the title (number 3), and that title corresponds with a CHAOSS metric that is linked below the graphic. These are line graphs, though other visualization styles are readily available, and the line over time is shown by (number 4). The gray area around (number 4) is the standard deviation. (Number 5) is a slider like you see on Google Finance, so you can zoom in on one period of time more closely. Finally, (number 6) has a LOT of different configuration and filtering options you can explore.
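Since the metrics behind these graphs are also served over the RESTful API as JSON, building your own view can be as simple as fetching a payload and reshaping it. The endpoint URL and payload shape below are illustrative assumptions, not Augur’s documented API; check the live examples for the real routes.

```python
import json

def monthly_series(payload):
    """Flatten a list of {'date': ..., 'commits': ...} records
    into (date, value) pairs sorted by date."""
    return sorted((row["date"], row["commits"]) for row in payload)

# In a live setting you might fetch the payload over HTTP, e.g.:
#   import urllib.request
#   payload = json.load(urllib.request.urlopen(
#       "http://twitter.augurlabs.io/api/..."))  # hypothetical route
sample = json.loads('[{"date": "2018-02", "commits": 12},'
                    ' {"date": "2018-01", "commits": 30}]')
print(monthly_series(sample))  # [('2018-01', 30), ('2018-02', 12)]
```

From pairs like these you can plot your own trend lines, compute deltas between periods, or join the series against a second repository for the kind of comparison Figure 6 shows.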

 

Figure 5. Here is a WAY zoomed out overview of the Growth, Maturity and Decline metrics you might see on the Augur page. (Number 1) is where you might enter another “owner/repo” combination to compare your repository to. (Number 2) illustrates that sometimes there is no data available from the source we use for a particular metric.

 

Figure 6. This shows you two repositories compared with each other in Augur. Does this fit any of your use cases or goals? How would you make it different? (Number 1) shows which two repositories are being compared. (Number 2) shows the key for knowing which project is which. (Number 3) points out, again, that you can see the CHAOSS definition for the metric any time you like. To the right, you can also see how .json, .csv and .svg representations of the data can be downloaded for you to make whatever use you would like of it.

Our Ask: Goals and Use Cases

Metrics use cases

What are the questions you have about your project? What metrics will help you to make clearer sense of the answer to that question in a productive way?

Give us your use cases

Walk through trying to solve the use case. Where do you get stuck? How might the use case become generalized? If you are an expert in OpenStack you can contribute … you can just describe the use case. Draw out the use cases that you see. We can ask back: why not use metric x and y? And the conversation will really get going!

References

S. Erdelez (1997) Information Encountering: A Conceptual Framework for Accidental Information Discovery. Taylor Graham Publishing, Tampere, Finland.

Click Here for a PDF Version of this Post That is Much Easier to Read

New GrimoireLab release: 18.09-02


We have a new release of GrimoireLab, 18.09-02, corresponding to grimoirelab-0.1.2 (the main Python package).

This release includes full support for Mattermost and GoogleHits, some improvements in the Kibiter UI and panels, some bug fixes, and minor new features.

The corresponding packages have been uploaded to PyPI (so they’re installable with pip). I’ve tested most of the examples in the GrimoireLab Tutorial with this new release, and everything seems to work. Please report any problem you may find.

As usual, this release of PyPI packages was generated with Docker containers, to ensure platform independence. You can install all the packages just with:

$ pip install grimoirelab

Remember that we also now have a new grimoirelab package that pulls in all the Python packages for the release. So installation is easier, and traceability too: to find out the GrimoireLab release, just run

$ grimoirelab -v
GrimoireLab 0.1.2

The tag you get (0.1.2 in this case) corresponds to a certain release file (18.09-02 in this case), and specific commits and Python package versions.

We have also produced four Docker images available in DockerHub, all of them with the tags :18.09-02 and :latest. You can pull and run them straight away:

  • grimoirelab/factory: for creating the Python packages
  • grimoirelab/installed: with GrimoireLab installed
  • grimoirelab/full: grimoirelab/installed plus the services needed to produce a dashboard; by default it produces a dashboard of the CHAOSS project
  • grimoirelab/secured: grimoirelab/full plus access control and SSL for access to Kibiter

If you want to use or help to debug the containers, have a look at the docker directory in the chaoss/grimoirelab repository.

The list of new stuff is in the NEWS file (check all changes since 18.08-01, which was the latest release with packages on PyPI).

CHAOSS at Community Leadership Summit 2018


The CHAOSS project aims to develop metrics and software for measuring open source projects. One group of people who care about this are community managers. Every year, Jono Bacon, a CHAOSS Governing Board member who professionalized community management with his book “The Art of Community”, invites community managers to his Community Leadership Summit. (In his book, Jono dedicated the entire chapter 7 to measuring communities.) Judging by the reactions on Twitter and engagement with other conference participants, metrics was a popular topic at the conference. It is no surprise that members of the CHAOSS project would naturally be at this conference. This blog post summarizes the presence of CHAOSS at the Community Leadership Summit and highlights some takeaways and insights.

Metrics Keynote at Community Leadership Summit 2018

Ray Paik giving his keynote on metrics at the Community Leadership Summit 2018. Picture used with permission from @ShillaSaebi.

 

Ray Paik, a long-time CHAOSS member, gave a keynote titled “Looking Beyond the Numbers”. He addressed why we use metrics, what pitfalls and flaws metrics have, and the dos and don’ts of community metrics. Slides are available online.

The CHAOSS Diversity and Inclusion workgroup, specifically Emma Irwin, Sean Goggins, Nicole Huesman, Daniel Izquierdo Cortazar, and Anita Sarma, organized a panel session on the topic of “Establishing Metrics that Matter for Diversity & Inclusion”. Questions discussed during the panel included: How can we safely collect open source project metrics without jeopardizing minority groups and their safety? What metrics can we have about the inclusiveness of software design? What are leadership challenges related to diversity and inclusion, and how can metrics help? A major takeaway is that when we create metrics and collect data, we should remember to talk to people who actually face the challenges, and not just “professionals or researchers” who may only know some things about the issues and not the complete picture, since they do not face these issues.

“Shaping Inclusive Meritocracy: What do you measure? What do you do?” was the title of an unconference session initiated by Sean Goggins, a CHAOSS Governing Board member. The session drew a good group of people who vividly engaged in conversations and exchanged ideas.

We welcome any and all who learned about CHAOSS this weekend to join our weekly calls and provide feedback on our metrics and software. The CHAOSS community aims to be useful to community managers; thus, we rely on your feedback.

Blogpost written by Georg Link with help from the CHAOSS community.

Call for Feedback!


Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS)

By Emma Irwin

In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. READ MORE

Copyright © 2018 The Linux Foundation® . All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page. Linux is a registered trademark of Linus Torvalds. Privacy Policy and Terms of Use.