A few reactions to the newest dataset uploaded today. (But first, a thank you for the new data.)
This is a very challenging set of movies.
1.) The focus seems to be off a little more than normal. The vessels seem a little more blurred.
2.) I am having more difficulty identifying a single targeted vessel. Many movies seem to have multiple vessels that could be targeted by the outline. This will create much more variability in the flowing versus stalled responses since we will each be assuming a different vessel as the target.
3.) If you are a new player, do the best you can. Hopefully, the next dataset will get back to normal and be easier to analyze.
… me too, Guy. I was also concerned for new players. This dataset is humbling. It'll sure keep me from getting cocky. Time to take a break and do some painting. Best of luck!
Mike
I was wondering if anyone else was having the same problem, and I was about to post something about it. These images are so blurry, I can barely see the blood vessels, much less the stalls. These things look like driving through thick fog at night; I can barely see anything. I'm pretty much just guessing at this point.
This response is premature as I'm still adjusting to / learning from this new dataset. But, rather than ignore you, Egle, my impressions at this point are:
Guy nailed the main issues with his comments.
Of course, it's "possible" to annotate any of the movies, but the confidence in doing so varies greatly. What proportion of the movies are strictly a best (or wild) guess? - I'd say maybe 5-8% or so; that could improve with experience. Newer users might struggle with 20% or more.
So far, I've only flagged a couple of movies. Both seemed to lack target vessels within the outline.
There are a few movies where the whole field, not just the target area, seems frozen. But then upon closer examination, I might see a faint trace (possible vessel just beyond focal range) moving through the oval. Not yet sure what to do with those.
Regarding multiple target vessels in the same view: if they are all stalled or all flowing, no big deal. If four are stalled and one is flowing, I'll mark "stalled" but leave a comment. If it's two and two, or three and three? I'm open to instruction, boss.
Enough of premature comments. I'll report back later with further insights, if any. (BTW, Mike, at least it's not as scary as driving at night through that thick fog.)
Hi Egle -
Here is a quick example to highlight Mike's point and the challenge we face in providing useful input.
Here are four images of the same movie and the same outline, just at different points in time. I've captured the range of frames that tends to highlight the vessel shown.
The vessel in the first image is stalled.
The large vessel in the second image is flowing.
The smaller vessels to the right side of the third image are stalled.
The vessel in the fourth image is flowing.
None of the vessels is really targeted or fits the outline, and the answer you give depends on which one you decide is the target. It's not clear how these annotations will yield any kind of consensus response or how experts may use this input.
Given the high density of vessels in many of the outlines, we are facing this type of challenge in a significant number of annotations. I concur with Mike that some clarification of which choice would be most useful would be helpful.
Egle @seplute
In response to your question about proportions that are impossible to annotate, here is my quick breakdown of 100 random real movies.
45% OK to annotate. Reasonable outline of the target vessel, and clear enough to make a determination.
29% Too many different vessels that could be considered the target. The movies are too long or penetrate through too many layers of vessels, so multiple vessels could be considered targets throughout the length of the movie. As Mike indicated, many times some are stalled and others are flowing, which creates a real dilemma. I've been marking the stalled vessels to bring these to the experts' attention, but this has increased my overall percentage of stalls.
21% Outline does not sufficiently target a single vessel. Different vessels may cross the outline at oblique angles, but none could be considered a reasonable target for annotating based on the outline. I suspect that the lack of clarity in the movies and the depth of field is throwing off the outlining algorithm. Again, I am defaulting to marking any stalls seen here if I can't locate an obvious target vessel.
5% Too blurry to make a legitimate call on flowing versus stalled. And that's using a laptop; I suspect that users with tablets or other handheld devices would rate this percentage much higher.
Hope this helps quantify our challenge,
Guy @gcalkins
Here are three examples of the most challenging types of movies. When the outlines fall within these dense areas, you either get multiple vessels that could be considered targeted as you play/scan the movie, or no particular vessel that seems to fit the outline. Players are left guessing which vessel to annotate.
Not sure how many different outlines are associated with each of these specific movies, but these are the most challenging. Maybe we need an "Advanced Player" section that doesn't expect beginning players to deal with these.
I really don't have anything to add to what Michael and Guy have said, except that the problem for me is that on every single one of these blurry movies, whether I answer flowing or stalled, the reply is always maybe. In terms of being diagnostic for your research, I really can't say what that means. I guess the problem for you is that these are real movies, not calibration movies like the others were, so you really should not just throw them out. The way your system works, maybe you need to leave them in for a little bit longer while you try to determine whether or not the crowd can come to a consensus one way or the other. Michael and Guy have done such a good job of pointing out the specific problems that I have nothing further to add. They are much better than I am at pointing out fine details.
Your difficulty analysis is very helpful, and I think it corroborates my own breakdown as well as that of @seplute. There seems to be a higher than normal density of vessels and more visual noise than usual. In some cases, it is simply impossible to determine which vessel, if any, is designated by an outline. This is definitely going to pose some challenges, but this is the nature of the data. As I said, we can't say too much about the dataset now, or we might introduce response bias. In any case, thanks for your help in cleansing this set. I am optimistic that the crowd results will speak to a new and exciting answer to an important research question, which will be revealed as soon as the crowd + lab analysis is complete.
Thanks for pushing through this challenging experience!
Some supercatchers may begin to notice that we are starting to repeat annotations of this challenging dataset. Woohooooo!!! The end is in sight. I certainly hope the next dataset is a little more normal. So far, I have not experienced any of the "errors" that sometimes appear when we are nearing the end of a dataset. Just be aware that any issues you might start to experience could be a result of this situation. Given the challenges of this dataset, we may need to extend our annotating a little longer than usual to get to consensus. Everyone should keep going until EyesOnAlz gives us new data.
Pietro - What is the status for new data? I know you were hoping to begin some automated uploading in the near future. We have not seen any new vessels lately and are starting to repeat.