
Metrics With Greater Utility: The Community Manager Use Case


By Sean Goggins 

Introduction

Community managers take a variety of perspectives, depending on where their communities are in the lifecycle of growth, maturity, and decline. This is an evolving report of what we are learning from community managers, some of whom we are working with on live experiments with Augur (http://www.github.com/CHAOSS/augur), a CHAOSS project prototyping software tool. At this point, we are paying particular attention to how community managers consume metrics and how the presentation of open source software health and sustainability metrics could make them more, and in some cases less, useful for doing their jobs.

Right now, based on Augur prototypes and follow-up discussions so far, we have the following observations that will inform our work both in the “Growth Maturity and Decline” working group and in Augur development. Here are a few things we have learned from prototyping Augur with community managers; these features in Augur are particularly valued:

  1. Allowing comparisons with projects within a defined universe of projects is essential
  2. Allowing community managers to periodically add and remove the repositories they monitor
  3. Downloadable graphics
  4. Downloadable data (.csv or .json)
  5. Availability of a “Metrics API”, limiting the amount of software infrastructure the community manager needs to maintain for themselves (a brief sketch of consuming such an API follows this list). This is more valued by program managers overseeing larger portfolios right now, but we think its appeal will grow as the relative light weight of this approach becomes more apparent. By apparent, we really mean “easy to use and understand”; right now it is easy for a programmer, but less so for a community manager without that background or current interest.
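
The value of a hosted metrics API is that a community manager only needs a small script, not their own data pipeline. Here is a minimal Python sketch of that idea; the base URL, route shape, and response format are illustrative assumptions, not Augur’s actual endpoints.

import requests

# Hypothetical Augur-style metrics endpoint; the real route and response
# format depend on the deployment and version, so treat this as a sketch.
BASE_URL = "https://augur.example.org/api/unstable"

def fetch_metric(owner, repo, metric="commits"):
    """Fetch one metric time series for a repository from a metrics API."""
    response = requests.get(f"{BASE_URL}/{owner}/{repo}/{metric}", timeout=30)
    response.raise_for_status()
    # Assumed shape: a list of {"date": ..., "value": ...} records.
    return response.json()

series = fetch_metric("CHAOSS", "augur")
print(f"Fetched {len(series)} data points for commits")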

Date Summarized Comparison Metrics

With these advantages in mind, making the most of this opportunity to help community managers with useful metrics is going to include the availability of date summarized comparison metrics. These types of metrics take two “filters” or “parameters” that are defined more abstractly in the Growth, Maturity, and Decline metrics of the CHAOSS project:

  1. Given a pool of repositories of interest to a community manager, rank them in ascending or descending order by a metric,
  2. either over a specified time period, or
  3. over a specified periodicity (e.g., month) for a length of time (e.g., a year).

For example, one open source program officer we talked with is interested in the following set of date summarized comparison metrics, given a pool of repositories of interest to the program officer (dozens to hundreds of repositories); a sketch of how the first question might be computed follows the list:

  1. What ten repositories have the most commits this year (straight commits, and lines of code)?
  2. How many new projects were launched this year?
  3. What are the top ten new repositories in terms of commits this year (straight commits, and lines of code)?
  4. How many commits and lines of code were contributed by outside contributors this calendar year? Organizationally sponsored contributors?
  5. What organizations are the top five external contributors of commits, comments, and merges?
  6. What are the total number of repository watchers we have across all of our projects?
  7. Which repositories have the most stars? Of the ones new this year? Of all the projects? Which projects have the most new stars this year?
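
As a sketch of how the first of these questions might be answered, the following Python/pandas snippet ranks repositories by commits and by lines of code for one year. The input file and column names are assumptions for illustration; any commit-level export from a metrics tool would work similarly.

import pandas as pd

# Hypothetical commit-level export: one row per commit, with the repository,
# commit hash, date, and lines added/removed. Column names are assumptions.
commits = pd.read_csv("commits.csv", parse_dates=["date"])

def top_repos(df, year, n=10, by="commits"):
    """Rank repositories by commit count or lines of code within one year."""
    in_year = df[df["date"].dt.year == year].copy()
    in_year["lines"] = in_year["lines_added"] + in_year["lines_removed"]
    summary = in_year.groupby("repository").agg(
        commits=("commit_hash", "nunique"),
        lines=("lines", "sum"),
    )
    return summary.sort_values(by, ascending=False).head(n)

# Top ten repositories this year, by straight commits and by lines of code.
print(top_repos(commits, year=2018, n=10, by="commits"))
print(top_repos(commits, year=2018, n=10, by="lines"))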

Open Ended Community Manager Questions to Support with Metrics

There are other, more open-ended questions that may be useful to open source community managers:

  1. Is a repository active?
    1. Visual differentiation that examines issue and commit data
    2. Activity in the past 30 days
    3. Across all repositories, present the 50th percentile as a baseline and show repositories above and below that line (see the sketch after this list)
  2. Should we archive this repository?
    1. Enable an input from the manager after reviewing statistics
    2. Activity level, inactivity level and dependencies
    3. Mean/Median/Mode histogram for commits/repo
  3. Should we feature this repository in our top 10? (Probably a subjective decision based on some kind of composite scoring system that is likely specific to the needs of every community manager or program office.)
  4. Who are our top authors? (Some kind of aggregated contribution ranking by time period [year, month, week, day?]. Nominally, I have a concern about these kinds of metrics being “gameable”, but if they are not visible to contributors themselves, there is less “gaming” opportunity.)
  5. What are our top repositories? (Probably a subjective decision based on some kind of composite scoring system that is likely specific to the needs of every community manager or program office.)
  6. Most active repositories by time period [year, month, week?]. Activity would be revealed through a mix of retention and maintainer activity, primarily focusing on the latter: the number of issues and commits, as well as the frequency of pull requests and the number of closed issues.
  7. Least active repositories by time period [week? month? year?]. The bottom of the scores calculated as above.
  8. Who is our most active contributor? (Some kind of aggregated contribution ranking by time period [year, month, week, day?]. Nominally, I have a concern about these kinds of metrics being “gameable”, but if they are not visible to contributors themselves, there is less “gaming” opportunity.)
  9. What new contributors submitted their first new patches/issues this week? (Visualization Note: New contributors can be colored in visualizations and then additionally a graph can be made for number of new contributors)
  10. Which contributors became inactive? (Will need a mechanism for setting “inactive” thresholds.)
  11. Baseline level for the “average” repository in an organization, and for each individual repository in the organization.
  12. What projects outside of a community manager’s general view (GitHub organization or other boundary) do my repositories depend on or do my contributors also significantly contribute to?
  13. Build a summary report in 140 characters or less. For example, “Your total commits in this time period [month, week?] across the organization increased 12% over the last period. Your most active repositories remained the same. You have 8 new contributors, which is 1 below your mean for the past year. For more information, click here.”
  14. Once a metrics baseline is established, what can be done to move them? [^1]
  15. Are there optimal measures for some metrics?
    1. Pull request size?
    2. Ratio of maintainers to contributors?
    3. New contributor to consistent contributor ratio?
    4. New contributor to maintainer ratio?
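
Item 1 above is straightforward to prototype. The sketch below computes each repository’s activity over the past 30 days from a combined commit/issue event export and flags repositories above or below the 50th percentile baseline. The input file and column names are assumptions for illustration.

import pandas as pd

# Hypothetical event export: one row per commit or issue event, with a
# repository name and a timestamp. Column names are assumptions.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Activity in the 30 days leading up to the newest event in the export.
cutoff = events["timestamp"].max() - pd.Timedelta(days=30)
recent = events[events["timestamp"] >= cutoff]
activity = recent.groupby("repository").size().rename("events_30d")

# The 50th percentile (median) across repositories is the baseline; note
# that repositories with zero events in the window are absent from 'recent'.
baseline = activity.median()
report = activity.to_frame()
report["above_baseline"] = report["events_30d"] > baseline

print(f"Median 30-day activity: {baseline}")
print(report.sort_values("events_30d", ascending=False))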

Augur Specific Design Change Recommendations

Next is a list of Augur specific design changes suggested thus far, based on conversations with community managers.

  1. Showing all of the projects in a GitHub organization in a dashboard by default is generally useful.
  2. Make the lines in the charts clearer, especially when there are multiple lines in comparison
  3. How to zoom in and out is not intuitive. In the case of Google Finance, for example, a default, subset period was displayed in the “below the line mirrored line” interface this design is modeled after. That older model makes it fairly clear that the box below the line is for adjusting the range of dates. Alternately, Google’s more updated way of representing time, providing users choices, and showing comparisons may be even more useful and engaging. In general, it is important that the time zooming be made clearer.
    Figure 1: In one view, Google lets you see a 1 year window of a stock’s performance.

    Figure 2: In another view, you can choose a 3 month period. Comparing the two time periods also draws out the trend with red or green colors, depending on whether or not the index, in this case a stock’s price, has increased or decreased overall during the selected time period.

    Figure 3: Comparisons are similarly interesting in Google’s finance interface. You can simply add a number of stocks in much the same way our users want to add a number of different repositories.

  4. For the projects a community manager chooses to follow, go ahead and give them comparison check-boxes at the top of the page. I think, from a design point of view, we should limit comparisons, as discussed, to 7 or 8, simply due to the limits of human visual perception.
  5. The ability to adjust the viewing windows to a month summary level is desired.
  6. Right now, Augur does not make it clear that metrics are, by default, aggregated by week.
  7. New contributor response time. When a new contributor joins a project, what is the response time for their contribution?
  8. A graph comparing commits and commit comments on the x and y axes between projects is desired, and the same for issues and issue comments (see the sketch after this list).
  9. In general, the last two years of data gets the most use. We should focus our default display on this range.
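
For item 8, a small plot sketch may make the intent concrete: one point per week per project, commits on one axis and commit comments on the other. The numbers below are made up for illustration; in practice they would come from a metrics API or data export.

import matplotlib.pyplot as plt

# Hypothetical weekly totals for two projects (illustrative values only).
projects = {
    "project-a": {"commits": [40, 55, 38, 61], "commit_comments": [12, 20, 9, 25]},
    "project-b": {"commits": [15, 22, 30, 18], "commit_comments": [30, 41, 52, 33]},
}

fig, ax = plt.subplots()
for name, series in projects.items():
    # One point per week: commits on x, commit comments on y.
    ax.scatter(series["commits"], series["commit_comments"], label=name)

ax.set_xlabel("Commits per week")
ax.set_ylabel("Commit comments per week")
ax.set_title("Commits vs. commit comments, compared between projects")
ax.legend()
plt.show()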

Data Source Trust Issues

  1. Greater transparency about the origins of metrics data will be helpful for understanding discrepancies between a community manager’s current understanding and what the metrics show.
    1. We should include some detailed notes from Brian Warner about how Facade is counting lines of code, and possibly some instrumentation to enable those counts to be altered by user provided parameters.
    2. Outside contributor organization data. One community manager reported that their lines of code by organization data seems to look wrong. I did explain that these are mapped from a list of companies and emails we put together, and getting this right is something community managers will need some kind of mapping tool to do. GitDM is a tool that people sometimes use to create these maps, and Augur follows a derivative of that work. It is probably the case that maintaining these affiliation lists is something that needs to be made easier for community managers, especially in cases where the set of organizations contributing to a project is diverse. (There is a substantial range among the community managers we spoke with: some are managing complex ecosystems involving mostly outside contributors, most are in the middle, and some have contributor lists highly skewed toward their own organization.)
  2. GHTorrent data, while excellent for prototyping, faces some limitations under the scrutiny of community managers. For example, when using the cloned repositories and then going back to issues, the issues data in GHTorrent does not “look right”. I think the GraphQL API might offer some possibilities for us to store issue statistics we pull directly from GitHub and update periodically as an alternative to GHTorrent (see the sketch after this list).
  3. When issues are moved from an older system, like Gerrit, into GitHub issues, in general, the statistics for the converted issues are dodgy, even through the GitHub API. We are likely to encounter this, and at some point may want to include Gerrit data in a common data structure with issues from GitHub and other sources.
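
As a sketch of the GitHub-direct alternative mentioned in item 2, the query below pulls total and closed issue counts for a repository from GitHub’s GraphQL API. It assumes a personal access token in the GITHUB_TOKEN environment variable; the repository name is just an example.

import os
import requests

QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    issues { totalCount }
    closed: issues(states: CLOSED) { totalCount }
  }
}
"""

def issue_counts(owner, name):
    # Query GitHub's GraphQL endpoint directly instead of relying on GHTorrent.
    response = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"owner": owner, "name": name}},
        headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    repo = response.json()["data"]["repository"]
    return repo["issues"]["totalCount"], repo["closed"]["totalCount"]

total, closed = issue_counts("chaoss", "augur")
print(f"{closed} of {total} issues are closed")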

New Metrics Suggested

  1. Add metric “number of clones”
  2. “Unique visitors” to a repository is an interesting data point available from the GitHub API.
  3. Include a metric that compares the ratio of new committers to total committers in a time period, or perhaps simply presents those two metrics side by side. Seeing the number of new committers in a set of repositories can be a useful indication of momentum in one direction or another, though I hasten to add that this is not canonically the case. (A small sketch of this ratio follows the list.)
  4. Some kind of representation of the ratio between commits and lines of code per commit
  5. Test coverage within a repository is something to consider measuring for safety critical systems software.
  6. Identifying the relationship between the DCO (Developer Certificate of Origin) and the CLA (Contributor License Agreement).
  7. There is a tension between risk and value that, as our metrics develop in those areas, we are well advised to keep in mind.
  8. The work that Matt Snell and Matt Germonprez at the University of Nebraska-Omaha are starting related to risk metrics is of great interest. Getting these metrics into Augur is something we should plan for as soon as reasonably possible.
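
The new committer ratio from item 3 can be prototyped with a plain commit log export. In the sketch below, the file and column names are assumptions; a “new” committer is anyone whose first commit in the data set falls inside the chosen period.

import pandas as pd

# Hypothetical commit log: one row per commit with author email and date.
commits = pd.read_csv("commits.csv", parse_dates=["date"])

def new_committer_ratio(df, period_start, period_end):
    """Share of committers in a period whose first commit falls in that period."""
    in_period = df[(df["date"] >= period_start) & (df["date"] < period_end)]
    committers = set(in_period["author_email"])
    seen_before = set(df.loc[df["date"] < period_start, "author_email"])
    new = committers - seen_before
    return len(new), len(committers), len(new) / max(len(committers), 1)

new, total, ratio = new_committer_ratio(commits, "2018-01-01", "2019-01-01")
print(f"{new} of {total} committers were new ({ratio:.0%})")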

Design Possibilities

Augur

For Augur, I think the interface changes that enable comparisons, and that make it more self-apparent how to compress or expand the time window (as in the Google examples), are at the top of the list of things that will make Augur more useful for community managers. Feedback on these notes will be helpful. I think the new committers to committers ratio is important, as is enabling comparisons across projects in the bubble graphs. Transparency about data sources and their limitations, for both the API and the front end, is above average but not complete, and it remains important.

Growth Maturity and Decline Working Group

Many of the metrics of interest to community managers fall under the “Growth Maturity and Decline” working group. From a design perspective it appears that, possibly, the way that metrics are expressed and consumed by these stakeholders in their individual derivatives of the community manager use case is quite far removed from the detailed definition work occurring around specific metrics. Discussion around an example implementation like Augur is helping draw out some of this more “zoomed out” feedback. The design of system interfaces frequently includes the need to navigate between granular details and the overall user experience (Zemel et al. 2007; Barab et al. 2007). This is less of a focus in the development of software engineering metrics, though recent research is beginning to illustrate the criticality of visual design for interpreting analytic information (Gonzalez-Torres et al. 2016).

References

  • Barab, S, T Dodge, MK Thomas, C Jackson, and H Tuzun. 2007. Our designs and the social agendas they carry. Journal of the Learning Sciences 16 (2): 263-305.
  • Gonzalez-Torres, Antonio, Francisco J. Garcia-Penalvo, Roberto Theron-Sanchez, and Ricardo Colomo-Palacios. 2016. Knowledge discovery in software teams by means of evolutionary visual software analytics. Science of Computer Programming 121: 55-74. doi:10.1016/j.scico.2015.09.005. https://linkinghub.elsevier.com/retrieve/pii/S0167642315002658.
  • Zemel, Alan, Timothy Koschmann, Curtis LeBaron, and Paul Feltovich. 2007. What are we Missing? Usability’s Indexical Ground. Computer Supported Cooperative Work.

Acknowledgements

Many members of the CHAOSS community contributed to this report and analysis. I am happy to share names with permission from the contributors, but I have not requested permission as of the publication date.

[^1]: Once we are to this point, I think CHAOSS is kicking butt and taking names.

A PDF Version of this Post is Available Here.

Open Community Metrics and Privacy: MozFest’18 Recap


By: Raniere Silva, Software Sustainability Institute, and Georg Link, University of Nebraska at Omaha.

This post was originally published here and on the Software Sustainability Institute blog.

Image by Raniere Silva.

TL;DR

Open communities lack a shared language to talk about metrics and share best practices. Metrics are aggregate information that summarise raw data into a single number, stripping away the data’s context. Pedagogical metric displays are an idea for metrics that include an explanation and educate the user on how to interpret the metric. Metrics are inherently biased and can lead to discrimination. Many problems brought up during the MozFest session are worked on in the CHAOSS project.

Introduction

Mozilla Festival, or MozFest, is an annual event organized by Mozilla volunteers to bring together thinkers and champions of the open internet. The event is held at Ravensbourne University in Greenwich, London, United Kingdom, and hosts a safe space for attendees to talk about decentralization, digital inclusion, openness, privacy, security, web literacy and much more.

At MozFest this year, we organised a 90-minute conversation on the topic of open community metrics and privacy. We seeded the conversation by sharing past examples (e.g., “we are 1,000,000 Mozillians”) and asked our nine participants to discuss what it means to aggregate metrics for open communities and what impact this can have for (non-)members of these communities. The following sections elaborate on thoughts, ideas, and discussions from the session.

What are Open Community Metrics?

The MozFest conversation revealed that we are lacking a shared language for talking about open community metrics. Our session indicated that the problem exists across open communities. Maybe it is because open community metrics have not been standardised, have not been around for long, or because every community has different ways of dealing with them. To resolve this, we discussed what an open community is and what metrics are.

We compared community to networks. A network consists of people who are connected to each other. In contrast, members of a community have a shared goal and purpose, often creating something together. Communities build a shared identity, and membership in the community is an alignment of one’s identity. Adding in the ‘open’, an open community is one that uses communication tools that display the conversations to anyone who wants to inspect them. An open community does not always welcome everyone as a member (inclusive governance), but it operates in a transparent way.

We contrasted metrics and data. Data is the raw information about who wrote a message to whom and what the content of the message was. The data needs to be enriched before it becomes information, such as how active a specific user is or how popular a certain topic is within the open community. Metrics are aggregates of this information, summarising it into a single number or graph. Metrics are the information you would expect to see on a dashboard that summarises aspects of your open community.

Open Community Data Example

The following example from a fictional open source project demonstrates how much information about participants in open communities is often available online. Such data can be aggregated by software such as Perceval, which retrieves and gathers data from the different places where open source contributors interact, for example GitHub, mailing lists, Discourse, Slack, and Twitter. Due to the open nature of the community, much personal data is made freely available. For example, a git commit retrieved by Perceval could be:

 

commit 944713e38290d8dd4a9ac7f02267ada72a0e5a10

Author: Jane Doe <jane.doe@mail.com>

Date:   Mon Oct 29 14:56:59 2018 +0000

   Adicionado arquivo README.md

 

The commit contains Jane’s full name and email address. Even if Jane were a very private person and, except for their name and email, no other information about them were available online, we could narrow the search for the real person based on the Portuguese commit message, timestamp, and timezone. With access to other databases, we could enhance Jane’s profile further and with a higher degree of accuracy and precision.
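
For concreteness, a commit record like the one above can be gathered with a few lines of Python using Perceval. This is a minimal sketch: the repository URL and clone path are placeholders, and the exact field names in each item may vary between Perceval versions.

from perceval.backends.core.git import Git

# Placeholder repository URL and local clone path.
repo = Git(uri="https://github.com/example/project", gitpath="/tmp/example-project.git")

for item in repo.fetch():
    commit = item["data"]
    # Each commit exposes the author's name, email, timestamp, and message,
    # all of which are already public in the repository itself.
    print(commit["commit"], commit["Author"], commit["AuthorDate"])
    print(commit["message"])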

How can we protect Jane’s privacy when open source communities are collecting, processing, aggregating, and analyzing data like the data we describe here? And how can open source communities stay true to their principles by making the data used in the analysis open (or FAIR – findability, accessibility, interoperability, and reusability) without violating Jane’s privacy?

Sharing how Metrics Should be Understood

Now that we have a better shared understanding of what metrics are, we can discuss the intricacies of understanding the meaning behind metrics. A problem with metrics is that they only provide a single number or graph, with no background information on how the data was collected, how it was summarized, and what the intentions of the creators of a metric were. This can lead to misunderstandings. For example, when number-of-messages is used as a metric to gauge the popularity of a topic, we need to understand how conventions unfold in an open community and in the tool it uses. A tool that has thumbs-up reactions or voting on messages provides a way to show interest in a topic and interact with messages, whereas the count of messages in a tool that lacks those features and only has text messages (e.g., mailing lists) will be inflated because it counts simple “+1” messages.

Another problem with metrics is that they are inherently biased. This runs counter to a popular belief that numbers are objective. At the foundation of metrics is data, which is biased because someone decided what data to collect and how to store it. The bias in data limits what information can be summarized by metrics. Furthermore, a metric reduces data to a single number or graph and as such removes detail and context. In this process, some data is weighted more than other data or not included at all. How data is summarized is determined by the metric creator, but what is done to the data is not obvious to a metric user.

These problems point to a need to bridge between metric creators and metric users. A solution we discussed at MozFest is to provide an explanation with every metric. The person or team that prepares a metric for an open community would be responsible for including a description to help with interpreting the metric. We might call this concept of a metric that includes an explanation and educates the user on its use a ‘pedagogical metric display’.

Metrics by Community vs. Metrics about Community

Open community metrics can be created by open communities about themselves, or by externals. Metrics created by people external to an open community have issues that stem from their external perspective. For example, they may not know the meaning behind interaction patterns of a community and as such may choose to summarize data that is not aligned with how the data came to be. Externals also might not be familiar with the goals and identity of an open community and thus create metrics that are misaligned. To resolve these issues, metrics might be better created by open communities. Pedagogical metric displays could bridge between internal metric creators and external metric users.

An interesting question is why metrics are created for open communities. A small open community typically does not have metrics; we only observe metrics being introduced in large and growing communities. An intuitive answer could be that in a small community it is easier to maintain a sense of how the community is doing, but in a larger community it becomes infeasible to keep track of everything at the interaction or data level, and thus people aggregate the data into metrics to maintain this sense of knowing how a community is doing. A follow-on question is whether communities use metrics to understand themselves better, or whether communities use metrics to drive their growth. For example, knowing and ensuring that new users who make their first post in a community forum receive friendly welcome replies can help a community thrive. Conversely, a daily notification to maintainers that sums up how many new posters did not receive a reply within 48 hours is in itself not helpful, because it only provides a negative metric without actionable insights.

A question raised at MozFest is about creating and sharing metrics. By this, we mean the process of calculating a metric. For example, one open community might come up with a metric that serves the community well and other communities are interested in using the same metric for themselves. The immediate practical question is how metrics can be transferred between contexts and adapted for different tools that these communities might be using. An ideological question is whether the metric will actually be useful to the second community.

Who is (Dis)Advantaged by Open Community Metrics?

This section explores the implications of privacy in the context of open community metrics. First, anyone who participates in an open community can be tracked and profiled. A user may try to maintain a sense of privacy by using a pseudonym, using a unique email address for each open community, and avoiding revealing personal information. How successful these approaches are is subject to further study, but there are examples of failures to maintain privacy in online spaces.

Another problem with open community metrics is how they are being used. A funding agency or other benefactor might want to see open community metrics that demonstrate impact or outcomes. This seems like a simple case where an open community that is tracking metrics and can show how it is producing outcomes or making an impact has a leg up in the struggle for financial support. Producing metrics could become a matter of making an open community sustainable.

Other uses of metrics are also possible. An open community can use metrics to identify areas where community members are struggling. Identifying and fixing a bug in a community platform improves the experience. However, a problematic use of metrics could come from outside a community. What if a community of runners, who share their tracked routes, heart rates, best times, and who they ran with, were analysed by an insurance company to identify new ways to structure insurance policies for people who run regularly (and consequently discriminate against parts of the community)?

We have heard that membership in some open communities is a differentiating factor for job applicants. A developer with an active GitHub or StackOverflow profile could have a higher chance of landing a new job. Conversely, a user who posts pictures of drinking parties in an open community could be disadvantaged in the hiring process. Granted, these are less issues of open community metrics and more about the data of open communities being available to external people. However, an open community can aid such discriminatory use cases by providing metrics at the level of the individual person (ever wonder who looks at the green dots on GitHub, and for what purposes?). The goal of metrics at the level of the individual might be to incentivize members of a community to participate more and behave in certain ways (e.g., avoid negative brownie points). But those same metrics could become signals and be used in unexpected ways. Naturally, people will begin to game a metric when it is displayed.

With the assumption that open community metrics are used in decision making (e.g., hiring, insurance premiums, credit reporting, a national citizen score, …), the people who do not participate in a community might be naturally disadvantaged. Some people may choose to avoid open communities and stay anonymous. However, some people lack internet access, do not speak the dominant language in their community (fluently), or do not have the time outside of job and family to participate. These people are disadvantaged for reasons outside of the open community, and the use of metrics is unfair from the outset.

Conclusion

The MozFest session was an exchange. We know from the participants that each of them took away something from the conversation. The central discussion points are summarized here. As facilitators, we see MozFest as a mirror of the conversation that is happening in CHAOSS. With this blog post, we hope to document some of the discussion and provide a baseline from which we can move forward. Many of the problems brought up in the MozFest session and documented here are problems that the CHAOSS project is working on (I know, shameless plug for CHAOSS).

Thanks!

We are thankful to the attendees who helped us brainstorm about open community metrics and privacy.

Reflections on CHAOSScon NA 2018


By Alexandre Courouble and John Hawley

This blog post originally appeared on the VMware Open Source Blog on October 9, 2018.

Previously, we’ve explored the challenge of measuring progress in open source projects and looked forward to the recent CHAOSScon meeting, held right before the North American Open Source Summit (OSS). CHAOSS, for those who may not know, is the Community Health Analytics Open Source Software project. August’s CHAOSScon marked the first time that the project had held its own, independent pre-OSS event.

After attending the event, we thought it might be interesting to share our takeaways from the conference and reflect on where we stand with regard to the challenges that we outlined both in our previous posts and in our CHAOSScon talk, “The Pains and Tribulations of Finding Data.”

For those who weren’t able to attend CHAOSScon and would like to see our talk, it’s now available for viewing here. We started with an overview of current solutions for gaining visibility into open source data and then outlined what we view as the challenges currently standing in the way of creating solid progress metrics for open source development. We’ll go back to the talk at the end of the post, but first, our overall takeaways.

One thing we appreciated about having a dedicated CHAOSScon was that the event attracted a mix of longtime colleagues and collaborators as well as people new to the community. In particular, there was a strong presence from corporate open source teams. Engineers from Twitter, Comcast, Google and Bitergia shared how they have been tackling different kinds of open source data challenges. Hearing about their own trials and tribulations definitely seemed to validate our impression that we share a number of basic data measurement problems that are worth addressing as a community.

It was good, too, to see CHAOSS welcoming these corporate perspectives. Open source conferences often eschew that kind of engagement, but it is useful to hear how teams are solving problems for themselves out in the wild. Here’s hoping that this marks the start of a new trend.

A pair of workshops in the afternoon offered another useful takeaway. One was on “Establishing Metrics That Matter for Diversity & Inclusion” and the other was a report from the CHAOSS working group on Growth-Maturity-Decline.

It was clear from the latter workshop that we now have a good number of quantifiable data points to establish where a project is on the growth-maturity-decline continuum. But diversity metrics present a much trickier challenge. The data there exists mostly in mailing lists and board discussions and is currently only really explored through surveys. But the issue provoked a really interesting discussion full of smart suggestions and we’re excited to see what new solutions the community will come up with in the future.

Turning to what we learned from our own panel, we were thrilled to be speaking in front of a similarly engaged audience. We opened with a shout out to the open source projects that have already created tooling around data acquisition and we were lucky enough to have maintainers from many of those projects in the room with us. It was good of our audience to indulge a presentation heavy on questions and light on answers. They seemed genuinely curious about the issues we were raising and interested in trying to figure out how to fundamentally address them – some even started working on potential solutions as we were speaking.

We didn’t arrive at any grand consensus on solutions, but it’s clear that there is active community interest in trying to at least understand the problem of open source metrics and how we might be able to solve it. That’s certainly inspiring us to keep working on the issue—after all, things will only get better as more ideas get discussed, researched, tried and retried. This is not something we expect to be magically fixed in a couple of steps, but we’re excited to keep reaching out to the colleagues we interacted with at the conference and see what develops.

Our final takeaway is a classic example of conference serendipity. We arrived there knowing about GrimoireLab, a tool for tracking data about multiple open source projects on a single dashboard—we even referenced it in our talk. But what we didn’t know is that it’s easy to create your own implementation of it. We attended a presentation where several groups shared how they had implemented GrimoireLab with success and we’re now implementing it internally ourselves to track the status of our open source projects. Talk about a win-win situation.

Stay tuned to the VMware Open Source Blog for future conference recaps and follow us on Twitter (@vmwopensource).

Helpful and Useful – The Open Source Software Metrics Holy Grail


By Sean Goggins

My colleague Matt Germonprez recently hit me and around 50 other people at CHAOSScon North America (2018) with this observation:

“A lot of times we get really great answers to the wrong questions.”

Matt explained this phenomenon as “type III error”, an allusion to the better-known statistical phenomena of type I and type II errors. If you are trying to solve a problem or improve a situation, sometimes great answers to the wrong questions can still be useful, because in all likelihood somebody is looking for the answer to that question! Or maybe it answers another curiosity you were not even thinking about. I think we should call this information encountering (Erdelez, 1997). There’s an old adage:

“Even a blind squirrel finds a nut every once in a while.”

For open source professionals a “Blind Squirrel” is little more than a potential name for a Jazz trio, and probably not the right imagery for explaining to your boss that you’re “working on open source metrics”. Yet these blind squirrels will encounter nuts a LOT more often if we make more nuts! “Metrics are nuts!” Not a good slogan, but that’s my metaphor. Making more metrics is easy for us because we have lots of data and we write software, and it stands to reason that more metrics will include more useful metrics. If you are the blind squirrel, it’s useful to find metrics.

Can you imagine all the useful things blind squirrels would find if we let them loose in an Ikea? “I came for the Swedish meatballs, I left with 2 closet organizing systems and a new kitchen”! A lot of things are useful, but in order for something to be helpful it needs to help you meet an important goal. To summarize:

Useful: Of all the different things I find in the Ikea, many of them are useful. Or, there are 75 metrics on this dashboard, and 3 of them are useful!

Helpful: You go into the endeavor with a goal, and leave with 3 metrics that help you achieve that goal. Or, you’re a blind squirrel that just ordered nuts online from Ikea.

Open Source Software Health Metrics: Let’s Go Crazy! Let’s Get Nuts!

Great answers to the wrong questions are more commonplace than we would prefer because open source software work is evolving quickly and we do not yet have a list of the right questions for many specific project situations. Let’s refer to questions as “metrics” now. Questions and metrics are nuts! Still a terrible slogan. Sometimes we do not know the question-metric-nut, and foraging through a forest of metrics is, if not helpful, a way to reduce the rising anxiety we feel when we are not sure what data helps to support our explanation of what is happening in a project ecosystem. So, if, like me and dozens of others working in and around the CHAOSS project, you are trying to achieve a goal for your project, there are two orthogonal, strategic starting points our colleague in CHAOSS, Jesus M. Gonzalez-Barahona, suggests:

  1. Goals: What are metrics going to help you accomplish?
  2. Use Cases: When you go to use metrics, what are the use cases you have? A case can be simple, ill formed and even ’unpretty’:
    1. “My manager wants to know if anyone else is working on this project?”
    2. “It seems like my community is leveling off? Is it? Or is it just so large now I cannot tell?”

Taking Action by Sharing Goals and Use Cases

Having a yard full of nuts to sort through can help you work toward the nuts you want. OK. The nut metaphor has gone too far. We are looking to use software, provided as a prototype and an example, to help talk through the details of the use cases you name. With you. The use cases that open source developers, foundations, community managers, and others have for evaluating open source software health and sustainability metrics are probably manageable in number.

We can give you some metrics to work with quickly using the CHAOSS sponsored metrics prototyping tool Augur.

What are we trying to accomplish with metrics? With Augur? One of our goals is to make it easier for open source stakeholders to “get their bearings” on a project and understand “how things are going”. We think that is most easily accomplished when comparisons to your own project over time, and to other projects you are familiar with, are readily available. Augur makes comparisons central.

Building Helpful Metrics

If you have already shared a list of repositories you are interested in with us, here’s what you have:

  1. an Augur site with those repos
  2. The opportunity to look at that site and help the whole CHAOSS community know:
    1. What use cases which particular metrics help you address
    2. What goals you have that could be met by something like Augur, but you cannot meet yet
    3. Something to hate. If you’ve ever been to an NHL game, you know that hating the other team is how we show our team we love them. It’s also a good brainstorming device.

So, OK. What do you want?

We want the opportunity to speak with you about your goals, use cases, and the failings of the tools currently at your disposal for “getting there”. If you’re feeling adventurous, I would like to be able to reference our conversations (anonymously) in research papers, because research papers are kind of the “code of the academic world”. That’s less important.

An Augur Experiment


If you do not have a list of repositories you have already shared with us, there are a few examples here: http://www.augurlabs.io/live-examples/.

Design Goals

The version of Augur that’s currently deployed has several design goals that seek to provide useful information through comparison within a project (over time) and across projects. The most fundamental metrics people are interested in include:

  1. What individuals committed the most lines of code in a time period?
  2. From what companies or other organizations are the individuals who committed the most lines of code in a time period?
  3. Derivative of the first two: Is this changing? Did I lose anyone? Who can this project NOT afford to lose?

Projects You Care About

Figure 1 is an example from Twitter, which shows an instance of Augur configured for all of the repositories in the Twitter ecosystem. When you go to http://twitter.augurlabs.io you get the list of repositories that you see in figure 1.

Figure 1. When you follow the URL above, or your own URL, you will see a list of the repositories that we have cloned and for which, using the technology behind “Facade”, a tool written by Brian Warner, we have calculated all the salient, basic, individual repository information. Here’s a list of those repositories.

 

Looking at my projects

When I look at the most basic data for one of my repositories, I have enough information to answer the most basic questions about it (See above). Figure 2 and Figure 3 illustrate the Augur pages you will see at the next level of “drill down”. Try clicking the months for even more information! Keep in mind this is ONLY the information for the repositories you shared with us, or the repositories part of one of our other live examples.

Figure 2. You can see the lines of code from the top two authors, as well as the space inefficient Augur tool bar. Please contact me if you have tips and tricks for getting developers to be more comfortable with putting aesthetics behind utility in web page design. I will buy you a case of beer.

 

Figure 3 is a second image of the same page, scrolled down just far enough to see that you can look at the top ten contributors as well as the top organizational contributors. We used a list of over 500 top-level domains, as well as tech companies we were able to “guess”, to start to resolve even these prototypes to specific companies. We did this because Amye asked us to, and we’re really gunning to make Gluster have more lustre. As if that’s possible.

Figure 3. A more detailed look at some of the information available on a repository by repository basis in Augur. We also show you the organizational affiliation information.

Explore the Rest of Augur

The focused repository views give the information that many open source folks tell us is their first line of interest when looking at their own projects. Keeping this conversation going is essential for the CHAOSS project, and for Augur’s utility in helping us identify which metrics map to which use cases and goals. There’s a lot here, and it might give you ideas. Also, as you go through the front end, keep in mind that all of the statistics you see represented as metrics are also available via our RESTful API. You can use our data to explore building your own metrics (a small sketch follows the figures below), or get an app developer to do that for you. Figure 4 provides a high-level overview of the metrics representations in Augur that are built off the GitHub API, GHTorrent and Facade’s technology.

Figure 4. There’s a lot here. At the top of the screen you can enter an owner and a repository name to get information about a particular repository. Each of the CHAOSS metric working groups is represented in tabs at the top of the screen (Number 1). The repository you just searched for is listed below the metric category (Number 2). The metric name is listed in the title (Number 3), and that title corresponds with a CHAOSS metric that is linked below the graphic. These are line graphs, though other visualization styles are readily available, and the line over time is shown by (Number 4). The gray area around (Number 4) is the standard deviation. (Number 5) is a slider like you see on Google Finance, so you can zoom in on one period of time more closely. Finally, (Number 6) has a LOT of different configuration and filtering options you can explore.

 

Figure 5. Here is a WAY zoomed out overview of the Growth, Maturity and Decline metrics you might see on the Augur page. (Number 1) is where you might enter another “owner/repo” combination to compare your repository to. (Number 2) illustrates that sometimes there is no data available from the source we use for a particular metric.

 

Figure 6. This shows you two repositories compared with each other in Augur. Does this fit any of your use cases or goals? How would you make it different? (Number 1) shows which two repositories are being compared. (Number 2) shows the key for knowing which project is which. (Number 3) points out, again, that you can see the CHAOSS definition for the metric any time you like. To the right, you can also see how .json, .csv and .svg representations of the data can be downloaded for you to make whatever use you would like of them.
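
As noted above, every metric shown in the figures is also available through the RESTful API or as a .json/.csv download, which makes it straightforward to build your own derived metrics. Here is a small Python sketch of that idea; the base URL and route shape are illustrative assumptions rather than Augur’s actual endpoints.

import requests

BASE = "https://augur.example.org/api/unstable"

def weekly_series(owner, repo, metric):
    # Assumed response shape: a list of {"date": ..., "value": ...} records.
    response = requests.get(f"{BASE}/{owner}/{repo}/{metric}", timeout=30)
    response.raise_for_status()
    return {point["date"]: point["value"] for point in response.json()}

# Example derived metric: commit comments per commit, week by week.
commits = weekly_series("example-org", "example-repo", "commits")
comments = weekly_series("example-org", "example-repo", "commit-comments")
ratio = {week: comments.get(week, 0) / count for week, count in commits.items() if count}
print(ratio)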

Our Ask: Goals and Use Cases

Metrics use cases

What are the questions you have about your project? What metrics will help you to make clearer sense of the answer to that question in a productive way?

Give us your use cases

Walk us through trying to solve the use case. Where do you get stuck? How might the use case become generalized? If you are an expert in OpenStack you can contribute … you can just describe the use case. Draw out the use cases that you see. We can ask back, “why not use metric x and y?”, and the conversation will really get going!

References

S. Erdelez (1997) Information Encountering: A Conceptual Framework for Accidental Information Discovery. Taylor Graham Publishing, Tampere, Finland.

Click Here for a PDF Version of this Post That is Much Easier to Read

New GrimoireLab release: 18.09-02


We have a new release of GrimoireLab, 18.09-02, corresponding to grimoirelab-0.1.2 (the main Python package).

This release includes full support for Mattermost and GoogleHits, some improvements in the Kibiter UI and panels, some bug fixes, and minor new features.

The corresponding packages have been uploaded to pypi (so they’re installable with pip). I’ve tested most of the examples in the GrimoireLab Tutorial with this new release, and everything seems to work. Please, report any problem you may find.

As usual, this release of pypi packages was generated with docker containers, to ensure platform independence. You can install all the packages just with:

$ pip install grimoirelab

Remember that now we also have a new grimoirelab package, which pulls in all the Python packages for the release. So installation is easier, and so is traceability: to find out which GrimoireLab release you have, just run

$ grimoirelab -v
GrimoireLab 0.1.2

The tag you get (0.1.2 in this case) corresponds to a certain release file (18.09-02 in this case), and specific commits and Python package versions.

We have also produced four Docker images available in DockerHub, all of them with the tags :18.09-02 and :latest. You can pull and run them straight away:

  • grimoirelab/factory: for creating the Python packages
  • grimoirelab/installed: with GrimoireLab installed
  • grimoirelab/full: grimoirelab/installed plus services needed to produce a dashboard, by default produces a dashboard of the CHAOSS project.
  • grimoirelab/secured: grimoirelab/full plus access control and SSL for access to Kibiter

If you want to use or help to debug the containers, have a look at the docker directory in the chaoss/grimoirelab repository.

The list of new stuff is in the NEWS file (check all changes since 18.08-01, which was the latest release with packages in pypi).

CHAOSS at Community Leadership Summit 2018


The CHAOSS project aims to develop metrics and software for measuring open source projects. One group of people who care about this is community managers. Every year, Jono Bacon, a CHAOSS Governing Board member who professionalized community management with his book “The Art of Community”, invites community managers to his Community Leadership Summit. (In his book, Jono dedicated the entire chapter 7 to measuring communities.) Judging by the reactions on Twitter and engagement with other conference participants, metrics was a popular topic at the conference. It is no surprise that members of the CHAOSS project would naturally be at this conference. This blog post summarizes the presence of CHAOSS at the Community Leadership Summit and highlights some takeaways and insights.

Metrics Keynote at Community Leadership Summit 2018

Ray Paik giving his keynote on metrics at the Community Leadership Summit 2018. Picture used with permission from @ShillaSaebi.

 

Ray Paik, a long-time CHAOSS member, gave a keynote titled “looking beyond the numbers”. He addressed why we use metrics, what pitfalls and flaws metrics have, and dos and don’ts of community metrics. Slides are available online.

The CHAOSS Diversity and Inclusion workgroup, specifically Emma Irwin, Sean Goggins, Nicole Huesman, Daniel Izquierdo Cortazar, and Anita Sarma, organized a panel session on the topic of “Establishing Metrics that Matter for Diversity & Inclusion”. Questions discussed during the panel included: How can we safely collect open source project metrics without jeopardizing minority groups and their safety? What metrics can we have about the inclusiveness of software design? What are leadership challenges related to diversity and inclusion, and how can metrics help? A major takeaway is that when we create metrics and collect data, we should remember to talk to people who actually face the challenges, and not just “professionals or researchers” who may only know some things about the issues and not the complete picture, since they do not face these issues.

“Shaping Inclusive Meritocracy: What do you measure? What do you do?” was the title of an unconference session initiated by Sean Goggins, a CHAOSS Governing Board member. The session drew a good group of people who vividly engaged in conversations and exchanged ideas.

We welcome any and all who learned about CHAOSS during this weekend to join our weekly calls and provide feedback on our metrics and software. The CHAOSS community aims to be useful to community managers, thus we rely on your feedback.

Blogpost written by Georg Link with help from the CHAOSS community.

Call for Feedback!


Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS)

By Emma Irwin

In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. READ MORE

CHAOSS at Open Source Leadership Summit 2018


Want to know more about Community Health and Analytics? Join CHAOSS at the Open Source Leadership Summit March 6th to 8th, 2018.

The CHAOSS community was formed as a result of a Birds-of-a-Feather session on Community Health Analytics at the Open Source Leadership Summit 2017. Come see what we have been up to and be a part of defining and creating tools to analyze community health.

Past Events

Open Source Summit Europe (October 23 – 26, 2017: Prague, Czech Republic)

  • CHAOSS project breakout session; Tuesday (Oct. 24th) at 12pm – 5pm local time; Room London

CHAOSS group picture at OSSEU2017.

Open Source Summit North America (September 11-14, 2017: Los Angeles, CA)

CHAOSS group picture at OSSNA2017.


CHAOSSCon + GrimoireCon Europe 2018


Meet the CHAOSS and GrimoireLab community in Brussels, Belgium on February 2nd, 2018. Come be a part of building, defining, and using tools for open source communities to track and analyze their development activities, community health, and diversity.

CHAOSSCon + GrimoireCon Europe will highlight CHAOSS and GrimoireLab updates, use cases, and feature hands-on workshops for developers, community managers, and project managers.

The workshops will cover basic training in using the open source GrimoireLab toolkit to analyze software development processes and manage them through metrics and KPIs.

Community managers, software development managers, developers, and anyone involved in open source and inner source software development will learn through real examples how to set up and use GrimoireLab for their specific needs.