Svante Bengtson

Freestyle Score Calculations

A freestyle routine is evaluated in two main dimensions: difficulty and presentation. These aspects are scored separately to encourage athletes to create routines that push the boundaries of the skills in the sport while remaining entertaining to watch for audiences inside and outside of our sport. We have had other posts on how each of these scores is calculated and the goals for each. This post talks about how the two scores are combined to give each aspect the proper weighting. We'll avoid the math as much as possible, but ultimately that's how this all gets done.

The IJRU Technical Congress believes that both aspects are important to a routine. That said, as many of you have commented, difficulty is the foundation of a routine. Our sport is built on a history of innovation and raising the bar and much of this is based on content. Since we are creating rules for competitive sport, it’s important to use the technical aspects of the routine as the foundation.

But that does not mean presentation is not important. For the growth of our sport, it is essential that it is accessible and attractive to those who know nothing about jump rope/rope skipping. Everyone in the sport today can tell the story of the first time they saw a jump rope/rope skipping show. Dry, engineered, technical routines are usually not memorable and do not attract people into the sport. The entertainment value of a routine is what creates a great first impression.

Incoming Rule Sets 

Both FISAC and WJRF weighted difficulty and presentation scoring, but in different ways.

FISAC, in its most recent rule set, used a ranking system in which all competitors in an event were ranked separately in difficulty and presentation. These ranks were then used to pick the winner. The advantage of this approach is that it eliminated the need to create a balanced point system that weighed difficulty and presentation in the right proportion; the order of the results is all that mattered. The drawback is that it did not take into account the size of the score differences behind those ranks. For example, if two athletes ranked closely in presentation but one had a much higher difficulty score, it may seem logical that this bigger difference in difficulty should give that athlete more credit, but under this system, being one point ahead counts the same as being a hundred points ahead. While this is an extreme example, this approach didn't seem ideal to us.

In the incoming WJRF rule set, the scoring calculations factor in target values for difficulty and presentation scores. This forces WJRF to create target values that weight both aspects in the desired proportion. Unfortunately, this is difficult to do for all levels of the sport, and that bar moves as the sport advances. At lower levels of the sport, athletes do better with presentation but their content scores tend to be low, so presentation scores end up being weighted much more strongly. At the high end of the sport, very dense and difficult routines create high content scores that make presentation a very small factor. In some cases, athletes are incentivized to focus less on presentation and spend that time adding more difficult skills to their routines.

The Goal 

We believe that an ideal system for combining content and presentation will: 

  • Work for athletes competing in all levels of the sport.

  • Use difficulty as the basis for scoring – at high levels of the sport, you can’t win without a hard routine.

  • Reward great presentation, and mark down weak presentation. 

  • Give presentation sufficient importance. Since we are using presentation to handle non-ideal aspects like repetitive skills, the presentation score has to have power to make major shifts in ranking at all levels of the sport (presentation must have ‘teeth’!).

Approach 

We have moved to a calculation system that uses the difficulty score as the basis, then uses the presentation score to make a plus/minus adjustment within a percentage range. For example, an athlete who receives a content score of 100 but a -40% adjustment for weak presentation would get a final score of 60. Similarly, an athlete with a base difficulty score of 50 could get a +30% boost from presentation, taking their final score to 65.

[Image: IJRU scoring calculation]

We are still working on the exact range and values, but at the beginning of a routine, an athlete will start with a 0% presentation adjustment, and through positive and negative marks on various elements from the presentation judges, that adjustment can end up a positive or negative percentage value.
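
To make the arithmetic concrete, here is a minimal sketch of how a percentage-based presentation adjustment would combine with a difficulty score. The function name is ours, and the -40%/+30% figures are just the illustrative examples from above; the actual adjustment range is still being decided.

```python
# Minimal sketch of the proposed combination: the difficulty score is the
# base, and the presentation judges' marks shift it up or down by a
# percentage. The exact adjustment range is still to be determined.

def combined_score(difficulty: float, presentation_adjustment: float) -> float:
    """Combine a difficulty score with a signed presentation adjustment.

    `presentation_adjustment` is a fraction, e.g. -0.40 for -40% or
    +0.30 for +30%. A routine starts at 0.0, and the judges' positive
    and negative marks move it within a (still undecided) range.
    """
    return difficulty * (1.0 + presentation_adjustment)

# The examples from this post:
print(combined_score(100, -0.40))  # 60.0
print(combined_score(50, +0.30))   # 65.0
```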

The benefit of this approach is that it meets our goal of a formula that works at all levels of the sport. Since the adjustment is a percentage, it works well on easy routines and hard ones alike. And because difficulty is the base, there is a hard upper limit on how well an easy routine can score, no matter how entertaining it is.

Conclusion 

This approach also gives presentation weight at all levels. A championship routine with great skills can be surpassed by a slightly easier routine with great presentation.  

Compared to the WJRF system, this is more consistent at all levels of the sport and will not need as much adjustment as the sport's difficulty increases.  

Compared to the FISAC system, this rewards athletes who are well above other athletes in difficulty and presentation while retaining a balance between both aspects.

We are still evaluating different approaches to deductions (required elements and misses).

Community Commentary

Winter festivities are around the corner and December is a stressful month. We have read all your feedback and discussed it, but unfortunately we haven't had time to write responses yet. To make sure we give you the best responses possible, we will publish this week's community commentary separately.

Survey not loading? Click here

Speed Signals, Sounds, and Call-Outs

This week on the blog, we will discuss the speed timing tracks. We have identified some key differences and issues that we are attempting to resolve with clearer definitions than FISAC-IRSF and WJRF had, and we'd like to guide you through our suggestion for new time tracks, part by part.

Start

This is the part where the event is presented to the athletes and where the athletes prepare for the event.

The Technical Congress has identified one key problem that prevents a simple merge of the FISAC-IRSF and WJRF time tracks: FISAC-IRSF uses "skippers ready" and WJRF uses "jumpers ready" when preparing the athletes for the start. IJRU's standpoint is that "Skipping/Jumping" ("Rope Skipping"/"Jump Rope") should always be used in an official context, but we all agree that "jumpers slash skippers ready" sounds flat-out bad in a "ready, set, go" call-out. Because of this, the Technical Congress proposes the wording "Athletes ready" as a neutral solution.

Therefore, all speed time tracks should start as follows:

"<Event Name> <Event Time> <2.000 seconds silence> Judges Ready? <0.500 seconds silence> Athletes Reday? <0.500 seconds silence> Set <0.500 seconds silence> <0.350 seconds BEEP>"

Here, "<Event Time>" is defined as "[<N> times] <Time> seconds", where "[<N> times]" is only required if the event is performed in a relay fashion (for example: "four times thirty seconds" or "one hundred eighty seconds"). All time definitions in the event presentation are given in seconds. We wanted to add the time definition of the event to the start of the timing track to make it easier to confirm that the correct timing track is being played.
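
As a rough illustration (not an official specification), the sketch below shows how the proposed start announcement could be assembled for any speed event. The function and parameter names are our own, and in a real time track the numbers would of course be spoken in words.

```python
# Hypothetical helper that builds the spoken introduction of a speed time
# track. Pauses and the beep are written as placeholders in angle brackets.

def start_announcement(event_name: str, seconds_per_athlete: int, legs: int = 1) -> str:
    """Build the introduction of a speed time track.

    The "<N> times" prefix is only included for relay-style events,
    and all durations are given in seconds.
    """
    if legs > 1:
        event_time = f"{legs} times {seconds_per_athlete} seconds"
    else:
        event_time = f"{seconds_per_athlete} seconds"
    return (
        f"{event_name} {event_time} "
        "<2.000 seconds silence> Judges Ready? <0.500 seconds silence> "
        "Athletes Ready? <0.500 seconds silence> Set <0.500 seconds silence> "
        "<0.350 seconds BEEP>"
    )

# For example, a 4 x 30 second relay:
print(start_announcement("Single Rope Speed Relay", 30, legs=4))
```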

[Image: timeline of the proposed speed event start sequence]

We also identified a problem where athletes sometimes misinterpret FISAC-IRSF switch signals as stop signals, because they sound exactly alike. To make the switch and stop signals as audible as possible, and as easy as possible to distinguish from the time call-outs, we want to use beeps instead of spoken signals. To keep the two apart, we want to use different frequencies for the start/stop signal and the switch signal.

As a baseline, we averaged the frequency of the sound FISAC-IRSF uses (a square wave with a frequency of 694 Hz) with the sound WJRF uses (a square wave with a frequency of 463 Hz). We then ended up at a square wave of 578.5 Hz, which conveniently enough is quite close to the tone D5 (587.3 Hz) in standard tuning (A = 440 Hz).

To separate the start/stop beeps from the switch beeps, we simply moved one whole step (two semitones) down to a C5 (523.3 Hz) for the switch beeps.

Both the FISAC-IRSF and WJRF beeps are approximately 350 ms long, and we don’t see any reason to change that.
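
For anyone who wants to experiment with the proposed sounds, here is a small sketch that generates 350 ms square-wave beeps at the two suggested pitches. The choice of numpy/scipy and the output file names are ours, and the exact frequencies may still change.

```python
# Generate the proposed start/stop and switch beeps as 16-bit WAV files.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44_100    # samples per second
BEEP_LENGTH = 0.350     # seconds, matching the FISAC-IRSF and WJRF beeps
START_STOP_HZ = 587.3   # ~D5, near the 578.5 Hz average of 694 Hz and 463 Hz
SWITCH_HZ = 523.3       # ~C5, one whole step below the start/stop beep

def square_beep(frequency_hz: float, length_s: float = BEEP_LENGTH) -> np.ndarray:
    """Return a mono 16-bit square wave of the given frequency and length."""
    t = np.arange(int(SAMPLE_RATE * length_s)) / SAMPLE_RATE
    wave = np.sign(np.sin(2 * np.pi * frequency_hz * t))
    return (wave * 0.5 * np.iinfo(np.int16).max).astype(np.int16)

wavfile.write("start_stop_beep.wav", SAMPLE_RATE, square_beep(START_STOP_HZ))
wavfile.write("switch_beep.wav", SAMPLE_RATE, square_beep(SWITCH_HZ))
```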

If we evaluate where those frequencies land on a Fletcher-Munson curve, which approximates how loudly a person with normal hearing perceives different frequencies, we can conclude that the IJRU sound should be well within an easily audible range. Below we have marked (from left to right) the FISAC-IRSF beep sound, the IJRU beep sound and the WJRF beep sound.

"Approximate equal loudness curves derived from Fletcher and Munson (1933) plus modern sources for frequencies > 16 kHz. The absolute threshold of hearing and threshold of pain curves are marked in red. Subsequent researchers refined these readings, culminating in the Phon scale and the ISO 226 standard equal loudness curves. Modern data indicates that the ear is significantly less sensitive to low frequencies than Fletcher and Munson's results." Image and description from this fantastic article by Monty from xiph.

Green vertical lines added compared to the original image; used with permission. (C) Copyright 2012 Red Hat Inc. and Xiph.Org

Looking at the image above, it might seem like the "optimal" frequency would be around 3500 Hz (approximately an A7), but that doesn't sound as pleasing. 1000 Hz was, however, a promising candidate for a beep sound.

We also reviewed a number of videos from competitions and speed events and looked at what sounds were heard during them. Interestingly, we could see that there is relatively little sound around 400-600 Hz, where we have suggested placing the beep sounds, which should once again make them as audible as possible. We also recognized that when the audience began cheering at the end of the events, the frequencies around 1000 Hz got very crowded, which could in turn make a 1000 Hz beep less audible. Therefore, we decided to keep the beeps at lower frequencies to make them a bit more audible.

Sound spectrogram of this speed event from the FISAC-IRSF Championships in Sweden 2016. The x-axis is time and the y-axis is frequency; more red means more sound at that frequency at that time, and blue represents less.
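
For those curious, a spectrogram like the one above can be produced with a few lines of code. The sketch below (the file name is only a placeholder) also compares the energy in the 400-600 Hz band we have suggested for the beeps with the 900-1100 Hz band that cheering tends to crowd.

```python
# Inspect a competition recording for crowded frequency bands.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("speed_event_recording.wav")  # placeholder file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix stereo down to mono

freqs, times, power = spectrogram(audio, fs=rate)

# Average power in the proposed beep band vs. the band crowded by cheering.
beep_band = power[(freqs >= 400) & (freqs <= 600)].mean()
cheer_band = power[(freqs >= 900) & (freqs <= 1100)].mean()
print(f"400-600 Hz mean power:  {beep_band:.3g}")
print(f"900-1100 Hz mean power: {cheer_band:.3g}")
```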

We had discussions about replacing the spoken "judges ready? athletes ready? set" part with a series of beep sounds, like in alpine skiing; however, we felt that a spoken time track feels less robotic and thus a bit more welcoming.

We also discussed randomizing the time between the "set" and the start beep each time the event is run, like in a track sprint event. However, we decided that we are more interested in testing the athletes' speed than their reaction time, and we believe that randomizing the start time is not beneficial.

Time Call-outs

To distinguish time call-outs from the start, stop, and switch signals, we are proposing to keep them as spoken call-outs; and to avoid unnecessary wordiness, we have omitted the word "seconds" from them.

We have tried to derive a formula for time call-outs, like we did with the start of the time tracks. It is best described with the following table.

We did this so that if events are added in the future, or if you run a local competition with different speed events, it should be easy to know exactly how the time track for that event should sound.

| Event duration \ Athlete compete time | Less than or equal to 1 minute | More than 1 minute                   | Soft limit    |
| Less than or equal to 1 minute        | Every 10 seconds               |                                      |               |
| More than 1 minute                    | Every 15 seconds               | Every 30 seconds and last 15 seconds |               |
| No limit                              |                                |                                      | At soft limit |

For triple unders, the call-outs would be "15".

For all the IJRU team speed events the call-outs would be "15", and for Double Dutch Speed Sprint they would be "15, 30, 45".

For Single Rope Speed Sprint the call-outs would be “10, 20”.

For Single Rope Speed Endurance the call-outs would be “30, 1 minute, 30, 2 minutes, 30, 45”.

In addition to these rules, we are proposing that any events this table doesn't cover, that are shorter than or equal to a minute, should have a call-out after half the event, rounded to the closest 5 seconds. If each athlete's section is longer than a minute, call-outs should be made every 30 seconds, and once halfway between the last "30" or "X minute(s)" call-out and the end beep, rounded to the closest 5 seconds.
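
Putting the table and the examples together, here is a rough sketch of how the per-athlete call-out times could be computed. The function name is our own, and the soft-limit row and the fallback rounding rules above are left out for brevity.

```python
# Call-out times within one athlete's section, following the table above.

def callout_times(event_seconds: int, athlete_seconds: int) -> list[int]:
    """Return the seconds within one athlete's section at which a spoken
    time call-out is made. The end of the section itself is a beep,
    so it is never called out."""
    if event_seconds <= 60 and athlete_seconds <= 60:
        interval = 10   # e.g. Single Rope Speed Sprint: 10, 20
    elif athlete_seconds <= 60:
        interval = 15   # e.g. the 4 x 30 second relays: 15
    else:
        interval = 30   # e.g. Single Rope Speed Endurance: 30, 60, ...
    times = list(range(interval, athlete_seconds, interval))
    if event_seconds > 60 and athlete_seconds > 60:
        times.append(athlete_seconds - 15)  # "and last 15 seconds"
    return sorted(set(times))

print(callout_times(30, 30))    # [10, 20]
print(callout_times(120, 30))   # [15]
print(callout_times(180, 180))  # [30, 60, 90, 120, 150, 165]
```

In the spoken track, multiples of a minute are called out as "1 minute", "2 minutes" and so on, and the seconds restart within each minute, so the 180-second endurance schedule above becomes "30, 1 minute, 30, 2 minutes, 30, 45".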

Examples

Note that these examples are NOT the final or official time tracks. Although they are accurate in timing, different accentuations might be made in the final versions, and a different voice might be used.

Community Commentary

We are still working through your feedback from last week, and will resume the community commentary feature next week.

Please take a moment to reply to the survey on last week’s post about Presentation Judging! We really want your input!

Also, don’t forget to sign up for our newsletter to get notified when we publish a new blog topic!

Until next week,
The IJRU Technical Congress

Survey doesn’t load? Click here