Newest Dataset (Oct 18th) (Challenging!)

A few reactions to the newest dataset uploaded today. (But first a thank you for new data.)

This is a very challenging set of movies.

1.) The focus seems to be off a little more than normal. The vessels seem a little more blurred.
2.) I am having more difficulty identifying a single targeted vessel. Many movies seem to have multiple vessels that could be targeted by the outline. This will create much more variability in the flowing versus stalled responses since we will each be assuming a different vessel as the target.
3.) If you are a new player, do the best you can. Hopefully, the next dataset will get back to normal and be easier to analyze.

Doing the best I can…

@gcalkins

… me too, Guy. I was also concerned for new players. This data set is humbling. It’ll sure keep me from getting cocky. Time to take a break and do some painting. Best of luck!
Mike

I was wondering if anyone else was having the same problem, and I was about to post something about it. These images are so blurry, I can barely see the blood vessels, much less the stalls. These things look like driving through a thick fog at night, I can barely see anything. I’m pretty much just guessing at this point.

I think we need some war paint to tackle this batch.

Hi @caprarom, @MikeLandau & @gcalkins!

Thanks again for the feedback you've already given on the new dataset & your help :wink:

After doing some more annotations, could you perhaps give a feeling for what proportion of the movies are impossible to annotate?

Best,
Egle

This response is premature as I’m still adjusting to / learning from this new dataset. But, rather than ignore you, Egle, my impressions at this point are:

Guy nailed the main issues with his comments.
Of course, it’s ‘possible’ to annotate any of the movies, but the confidence in doing so varies greatly. What proportion of the movies are strictly a best (or wild) guess? - I’d say maybe 5-8% or so; that could improve with experience. Newer users might struggle with 20% or more.
So far, I’ve only flagged a couple movies. Both seemed to lack target vessels within the outline.
There are a few movies where the whole field, not just the target area, seems frozen. But then upon closer examination, I might see a faint trace (possible vessel just beyond focal range) moving through the oval. Not yet sure what to do with those.
Regarding multiple target vessels in the same view, if they are all stalled or flowing, no big deal. If four are stalled and one is flowing, I’ll mark “stalled” but leave a comment. If it’s two and two or three & three? - I’m open to instruction, boss. :wink:

Enough of premature comments. Will report back later with any further insights, if any. (BTW, Mike, at least it’s not as scary as driving at night through that thick fog.)

Hi Egle -
Here is a quick example to highlight Mike’s point and the challenge we face in providing useful input.

Here are four images of the same movie and the same outline, just at different points in time. I’ve captured the range of frames that tend to highlight the vessel shown.

Frames 1-15 … Frames 17-27 … Frames 24-56 … Frames 35-50

The vessel in the first image is stalled.
The large vessel in the second image is flowing.
The smaller vessels to the right side of the third image are stalled.
The vessel in the fourth image is flowing.

None of the vessels is really targeted or fits the outline, and the answer you give will depend on which one you decide is the target. It’s not clear how these annotations will yield any type of consensus response or how experts may use this input.

Given the high density of vessels in many of the outlines, we are facing this type of challenge in a significant number of annotations. I concur with Mike that some clarification of which choice would be most useful would be helpful.

Guy

Lol!.. Love the war paint. Let me know if it helps… - Guy

Egle @seplute
In response to your question about proportions that are impossible to annotate, here is my quick breakdown of 100 random real movies.

45% OK to annotate. Reasonable outline of target vessel and clear enough to make a reasonable determination.

29% Too many different vessels that could be considered to be the target. The movies are too long or penetrate through too many layers of vessels. Multiple vessels could be considered as the targets throughout the length of the movie. As Mike indicated, many times some are stalled and others are flowing which creates a real dilemma. I’ve been marking the stalled vessels to bring these to the experts’ attention, but this has increased my overall percentage of stalls.

21% Outline does not sufficiently target a single vessel. Different vessels may cross at oblique angles to the outline, but none could be considered a reasonable target for annotating based on the outline. I suspect that the lack of clarity in the movies and the depth of field is throwing off the outlining algorithm. Again, I am defaulting to marking any stalls seen here if I can’t locate an obvious target vessel.

5% Too blurry to make a legitimate call on flowing versus stalled. And this is using a laptop; I would suspect that users with tablets or other handheld devices would rate this percentage much higher.

Hope this helps quantify our challenge,
Guy @gcalkins

Here are three examples of the most challenging types of movies. When the outlines fall within these dense areas you either get multiple vessels that could be considered targeted as you play/scan the movie or no particular vessel that seems to fit the outline. Players are left guessing as to which vessel to annotate.

Not sure how many different outlines are associated with each of these specific movies, but these are the most challenging. Maybe we need an “Advanced Player” section that doesn’t expect beginning players to deal with these.

I really don’t have anything to add to what Michael and Guy have said, except that on every single one of these blurry movies, whether I answer flowing or stalled, the reply is always “maybe,” so in terms of being diagnostic for your research, I really can’t say what that means. I guess the problem for you is that these are real movies, not calibration movies like the others were, so you really shouldn’t just throw them out. The way your system works, maybe you need to leave them in for a little bit longer while you try to determine whether or not the crowd can come to a consensus one way or the other. Michael and Guy have done such a good job of pointing out specific problems that I really have nothing further to add; they are much better than I am at pointing out fine details.

Hi @caprarom!

Just want to say, I think war paint was an excellent strategy :+1:

All best,
@pietro

Dear Supercatchers,

What would we do without you!

Your difficulty analysis is very helpful and I think it corroborates my own breakdown as well as that of @seplute. There seems to be a higher than normal density of vessels and more than normal visual noise. In some cases, it is simply impossible to determine which, if any, vessel is designated by an outline. This is definitely going to pose some challenges, but this is the nature of the data. As I said, we can’t say too much about the dataset now, or we might introduce response bias. In any case, thanks for your help in cleansing this set, and I am optimistic that the crowd results will speak to a new and exciting answer to an important research question, which will be revealed as soon as the crowd + lab analysis is complete.

Thanks for pushing through this challenging experience!

All the best,
Pietro

Some supercatchers may begin to notice that we are starting to repeat annotations of this challenging dataset. Woohooooo!!! The end is in sight. I certainly hope the next dataset is a little more normal. So far, I have not experienced any of the “errors” that sometimes appear when we are nearing the end of a dataset. Just be aware that any issues you might start to experience could be a result of this situation. Given the challenges of this dataset, we may need to extend our annotating a little more than usual to get to consensus. Everyone should keep going until EyesOnAlz gives us new data.

Pietro - What is the status for new data? I know you were hoping to begin some automated uploading in the near future. We have not seen any new vessels lately and are starting to repeat.

Any news would be welcome.

Thanks,
Guy

Yay, I have just started on the new data set! It is much more normal than the last dataset :relieved:
