The Insights Team's work and what it means to you (IFT Projects)

I wanted to address a few things about the IFT Insights Team: How we (should) work and what we’re asking from folks.

Insights Transparency of Work

Insights NCTs are a great place for us (Insights) to describe all of our work and its current status. We are failing at keeping these up to date and need to figure out a way to ensure we do. This is the transparency that we (the IFT) have said we'd provide, and we're falling short. My personal apologies here.

Definitions of definitions

I think there’s some confusion across the board on how all these “reporting” initiatives fit into each other. Here’s how I see them:

  • The top level is Milestones. These are larger pieces of work that detail big wins for a project. They should give anyone a sufficient bird's-eye view of what a project is doing on its roadmap. When asked what a project is up to over the course of a year, they list their milestones. Here's what I wrote originally when it was introduced; this hasn't really changed.

  • The next level is where MOST of the confusion kicks in, because we're using a bunch of different words to effectively describe the same thing: epics / FURPS / deliverables / components / etc.

  • Each Milestone is considered "Done" when it completes a set of defined Deliverables. A deliverable is something someone can pick up and run with, without needing the project in order to use it.

  • How is this deliverable defined? How do we know what "done" even means? This is where FURPS comes into play. It is the common language we use to define the various requirements (and their types) of a given deliverable. It makes "Done" explicit. This allows us to understand the boundaries of how a given deliverable can be used, and gives explicit targets for delivery so that everyone is on the same page. Additionally, it allows us to describe improvements to deliverables, e.g. "We've found a way to significantly decrease the latency of protocol X, which allows for A, B, C. Here's the work required to do it (next Deliverable definition)."

    • So each definition of a deliverable is one of two things: a new piece of functionality within a project (that presumably builds off of some other deliverable), or the extension (improvement) of a previously defined deliverable.
  • The newly discussed Component Inventory, in my head, is simply the list of "Done" deliverables across the IFT. It is an inventory of what we offer as a software organization. If we tack on what we're looking at adding to this inventory, then we have a roadmap of everything we're doing (software-wise). From that, we can all make good decisions on how best to manage it and push it forward as a collective.
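As a rough mental model, the hierarchy above (Milestone → Deliverables → FURPS, with the Component Inventory as the set of "Done" deliverables) could be sketched as follows. All the names here are illustrative assumptions, not an actual IFT schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- these class names and fields are assumptions,
# not an official IFT data model.

@dataclass
class Furps:
    """One FURPS requirement: Functionality, Usability, Reliability,
    Performance, or Supportability. The FURPS set makes "Done" explicit."""
    category: str      # one of "F", "U", "R", "P", "S"
    description: str

@dataclass
class Deliverable:
    """Something someone can pick up and run with, without needing the
    project. Its FURPS define what "Done" means for it."""
    name: str
    requirements: list[Furps] = field(default_factory=list)
    done: bool = False

@dataclass
class Milestone:
    """A big win for a project; Done when all its deliverables are Done."""
    name: str
    deliverables: list[Deliverable] = field(default_factory=list)

    def is_done(self) -> bool:
        return all(d.done for d in self.deliverables)

def component_inventory(milestones: list[Milestone]) -> list[Deliverable]:
    """The Component Inventory as described: Done deliverables across the IFT."""
    return [d for m in milestones for d in m.deliverables if d.done]
```

In this framing, an "extension" deliverable (the latency-improvement example) is just a new `Deliverable` whose FURPS tighten or add to a previous one's.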

I hope this clears up some of the misunderstandings or confusion around what the Insights Team is asking from projects, and the benefit we think it will bring to the org over the short and long term. If not, ask questions so we can get to a common understanding.


@hegaleon has informed me that my explanation of the Component Inventory isn't complete and is more complicated than that, as it includes "more parts of the big picture."

I will sync with him and develop a better story that helps put all the pieces into place so that we all understand.

Nice.

So for Waku FURPS, I used a model where I sometimes have FURPS for:

  1. one milestone, where I then specify which deliverable implements which FURPS. The reason is that there is sometimes overlap between deliverables when delivering a functionality.
  2. one deliverable

So this seems more or less aligned with your proposal.

(2) is when we have a new deliverable, and new FURPS.
(1) is due to the fact that one deliverable brings a list of FURPS, and then a second deliverable brings new FURPS to it (hence the overlap).

I’d be happy to review Waku FURPS and simplify to have a set of FURPS per deliverable. Should I?
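If it helps, the overlap in model (1) can be pictured as a second deliverable that carries forward the FURPS list established by a first deliverable and adds to it. The deliverable names and FURPS strings below are invented for illustration, not real Waku FURPS:

```python
# Hypothetical example of model (1): a second deliverable brings new FURPS
# on top of the list established by a first deliverable.

base_furps = ["F: relay messages", "R: survive peer churn"]

# Deliverable 1 establishes the initial FURPS for the functionality.
deliverable_1 = {"name": "Protocol v1", "furps": list(base_furps)}

# Deliverable 2 adds new FURPS to the same functionality (hence the overlap).
deliverable_2 = {
    "name": "Protocol v2",
    "furps": base_furps + ["P: query latency under 200 ms"],
}

# The overlap is exactly the shared base FURPS; model (2) would instead give
# each deliverable its own disjoint FURPS set.
overlap = set(deliverable_1["furps"]) & set(deliverable_2["furps"])
```

Simplifying to one FURPS set per deliverable, as proposed, would make `overlap` empty by construction.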


Another question I've had in mind recently: we have some exploratory work where a single PoC deliverable pushes the needle in a given direction (e.g. mixnet integration).

But once it's done, by itself it doesn't really define a "Big Win" for the project just yet.

I see 3 approaches to that:

  1. Shove it into a milestone with similar deliverables, which feels very artificial. I did that for Explore Peer Discovery Gap.
  2. Define a “draft” milestone, that indeed will be a big win, but:
    • The milestone will not be done within the span of a Half-Year, or even year
    • The milestone is likely to change a lot until it's clearly defined (i.e., we get within 6 months of completion)
  3. Have orphan deliverables that are not part of a milestone just yet, but will be initial work for some future big win. This may be the better plan IMO.

Also, Waku seems to be the only project pushing its roadmap to https://roadmap.logos.co/

What’s the IFT directive here? And what about non-Logos IFT projects?
