It is shown that, under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into account. Three simple examples are presented in which the indicators fluctuate strongly and the ranking of the scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. This erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, in particular for tied papers. For 100 percentile rank classes the effects are less serious. For the six classes, it is demonstrated that a different way of assigning weights avoids these problems, although the nonlinearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesirable, because it can be used to correct for differences in citation behavior across fields. Remaining deviations from the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule: fractional scoring. Previously proposed consistency criteria are amended by a further property, strict independence, at which a performance indicator should aim.
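The origin of the theoretical value R(6,k) = 1.91 can be made concrete with a small sketch. Assuming the standard six percentile rank classes (bottom 50%, next 25%, 15%, 5%, 4%, and top 1%) with weights 1 to 6, the expected mean weight of a paper drawn from the reference distribution is exactly 1.91. The function and threshold names below are illustrative, not the paper's exact implementation, and the simple class assignment shown here ignores the tie-handling subtleties that the abstract identifies as the source of the erratic behavior.

```python
# Illustrative sketch of the six percentile rank classes (assumed shares
# and weights as commonly used for 6PR; not the paper's exact code).
CLASS_SHARES = [0.50, 0.25, 0.15, 0.05, 0.04, 0.01]  # expected share per class
WEIGHTS = [1, 2, 3, 4, 5, 6]                         # weight per class

# Expected mean weight of a randomly drawn paper: the theoretical R(6,k).
expected = sum(s * w for s, w in zip(CLASS_SHARES, WEIGHTS))
print(round(expected, 2))  # 1.91

def percentile_class(percentile):
    """Map a citation percentile (0..100, higher = more cited) to a weight 1..6.

    Naive assignment by fixed thresholds; tied papers at a class boundary
    are NOT handled here, which is exactly where the inconsistencies arise.
    """
    thresholds = [50, 75, 90, 95, 99]
    for weight, t in zip(WEIGHTS, thresholds):
        if percentile < t:
            return weight
    return WEIGHTS[-1]  # top 1%

def r6(percentiles):
    """Mean class weight of one scientist's papers (hypothetical helper)."""
    return sum(percentile_class(p) for p in percentiles) / len(percentiles)
```

A set of papers whose percentiles match the reference shares yields a mean weight of 1.91; values above that indicate above-average impact under this weighting.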