Review Algorithm/1.4.4

When you publish a new game, a notification appears 3 ticks later (the little squares next to the week display) to tell you that the first reviews are out.

These reviews have a huge impact on sales, thus the need to maximize your score.

This article explains how the review score is calculated. It follows the logical progression of the game's code, to allow for better comprehension if you read the code at the same time.

Only one score is generated for a game. An artificial transformation is then applied to get the four scores you can see on the review screen.

You can know your exact game/review scores by using Game Flags to enable the console, which will give you useful messages throughout your playthrough.

The necessary data tables can be found in the article Raw Data.

Perfect score and Technical expertise
For each development focus that has an importance of ≥ 0.9 for the genre of the game, count the number of experts (in the corresponding dev focus) assigned. Let this number be ec. The first thing the game does is check whether it is possible for the game to get a perfect score. If ec >= 2, a perfect score of 10/10/10/10 is '''possible'''. Otherwise (in particular if the game is of small size, where you cannot assign staff to individual parts of the game) the maximum possible score will always be 10/10/10/9.
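This check can be sketched as follows (a minimal sketch; the function and variable names are my own, not the game's internal identifiers):

```python
# Sketch of the perfect-score check: expert_count (ec) is the number of
# experts assigned to development focuses whose importance for the
# chosen genre is >= 0.9. Names are assumptions, not the game's code.

def max_review_scores(expert_count):
    """Return the best set of review scores the game can receive."""
    if expert_count >= 2:
        return [10, 10, 10, 10]   # a perfect score is possible
    return [10, 10, 10, 9]        # one reviewer is always capped at 9
```

For example, `max_review_scores(2)` returns `[10, 10, 10, 10]`, while any smaller expert count caps the best possible result at `[10, 10, 10, 9]`.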

Let tl be the tech level, defined as the average of the Tech Levels of all Graphic features in the custom engine (Raw Data : Tech Level) and the Tech Level of the used platform (Tech Levels of platforms). The technical expertise factor is computed as follows:

Examples and Consequences

 * Only experts determine whether a perfect score is possible, for all game sizes (except small, for which you cannot assign individual developers).
 * The tech level has an effect on technical expertise only for large and AAA games, but it always has an effect on the base points of the game, which affect the amount of tech and design points and therefore the score too. For large games, a tech level greater than 3 has no additional effect on technical expertise; for AAA games, the threshold is 5. Moreover, for AAA games you need at least tech level 3 to increase technical expertise at all (note that for AAA games, k is indeed calculated with 5 as a base level but clamped to 3).
 * Experts have an effect on the technical expertise factor only for AAA games.

Calculating Target Game Score
To calculate the game's review score, it is compared to a "Target Game Score": if we reach it, we get the maximum possible review scores (determined by the technical expertise factor); if not, we get a portion of them based on how close we came. Three values are used to calculate the Target Game Score:

Let's abbreviate target game score as TGS, top score as TS, top score delta as TSD and year modifier as Y.
 * Top Score is the highest game score (g, defined below) ever achieved (updated only when the review score is >= 9)
 * Top Score Delta is (usually) the value by which Top Score was increased last time it was updated
 * Year modifier is a flat constant based on current in-game year

With Y defined as above, we can simply calculate TGS as: TGS = TS + TSD * Y, or 20 in case we have not set a Top Score yet. This is the value that your game score will be compared against. However, the way TSD is calculated is a bit tricky and needs to be explained in order for you to know your Target Game Score precisely.

Updating Top Score Delta and Top Score
When you achieve a >=9 review score, the counter of achieved top scores is increased by 1 (used in post processing), and if your game score is bigger than your Top Score (or you have not set a Top Score yet), Top Score and Top Score Delta are updated as follows:

If you have not set a Top Score yet:
 * TSD = whichever is greater between (g - 20) and 2
 * TS = g

If you already have a Top Score:
 * TSD = whichever is greater between (g - TS) and (0.1 * TGS)
 * If you are not in the garage, TSD is capped at 0.2 * TS (meaning if TSD > 0.2 * TS then TSD = 0.2 * TS)
 * TS = g

What this means is that in order to know your Target Game Score, you need to keep track of the game scores of all your top-scoring games since the first one. Every time you achieve a new top score, you have to do these calculations and write down your TSD and TS for your next game.
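These update rules and the resulting Target Game Score can be sketched as follows (a sketch under the rules above; `y` is the year modifier and all names are my own, not the game's internal identifiers):

```python
# Sketch of the Top Score / Top Score Delta update after a game that
# reviewed >= 9, plus the resulting Target Game Score.
# Names are assumptions, not the game's own code.

def update_top_score(g, ts=None, tgs=20.0, in_garage=False):
    """Return the new (TS, TSD) after a game score g."""
    if ts is None:                        # no Top Score set yet
        tsd = max(g - 20.0, 2.0)
    else:
        tsd = max(g - ts, 0.1 * tgs)
        if not in_garage:
            tsd = min(tsd, 0.2 * ts)      # cap outside the garage
    return g, tsd                         # the new TS is simply g

def target_game_score(ts, tsd, y):
    """TGS = TS + TSD * Y, or 20 before any Top Score exists."""
    if ts is None:
        return 20.0
    return ts + tsd * y

# First >= 9 game with score 30: TS = 30, TSD = max(30 - 20, 2) = 10
print(update_top_score(30.0))            # (30.0, 10.0)
```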

If at any point in time (after you have moved out of the garage) you only know your last game's score and that it scored a 9 or more, you can only be sure that your Target Game Score is no bigger than 123%/124%/122% of your last game's score (depending on the year).

If you know two consecutive game scores that each reached 9 or more, you can approximate it better by assuming it is equal to whichever is greater between (112.3%/112.4%/112.2% of your most recent game's score, depending on the year) and (your most recent game's score plus the difference between the two scores, multiplied by the year modifier), but no bigger than 123%/124%/122% of your most recent game's score.

Note that what is updated here is used for your next game, because it is updated after target game score for the current game has already been read.

Review Score calculation
The first step is to compute a quality factor that intervenes later in the process :

Quality factor calculation

 * q (quality factor) is initially q=1
 * MMOMOD is initially MMOMOD = 1 if your game is not an MMO, MMOMOD = 2 if it is

Tech/Design balance (Only taken into account if Tech + Design >= 30)

 * Tech is the number in the blue circle, Design the number in the light orange circle.


 * Let R be the optimal ratio for the selected genre in Raw Data : Tech/Design ratio, T and D the Technology and Design scores respectively

t = (D * R - T)/max(T,D)


 * If |t| <= 0.25 : q increased by 0.1, and a "good balance" message is added to the queue
 * If |t| > 0.50 : q decreased by 0.1, and a "bad balance" message is added to the queue
 * Other values : nothing happens
 * Example: an Adventure game (R = 0.4) finished with D = 50, T = 20 gives t = (50 * 0.4 - 20) / 50 = 0, so q is increased by 0.1 and a good balance message is added.
 * An Action game (R = 1.8) finished with D = 50, T = 20 gives t = (50 * 1.8 - 20) / 50 = 1.4, so q is decreased by 0.1 and a bad balance message is added.
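This balance check can be sketched as follows (a sketch; names are my own):

```python
# Sketch of the tech/design balance check; only applied when
# Tech + Design >= 30. Names are assumptions, not the game's code.

def balance_adjustment(design, tech, ratio):
    """Return (change to q, balance message or None)."""
    if tech + design < 30:
        return 0.0, None
    t = (design * ratio - tech) / max(tech, design)
    if abs(t) <= 0.25:
        return 0.1, "good balance"
    if abs(t) > 0.50:
        return -0.1, "bad balance"
    return 0.0, None

print(balance_adjustment(50, 20, 0.4))   # (0.1, 'good balance')
print(balance_adjustment(50, 20, 1.8))   # (-0.1, 'bad balance')
```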

Design decisions

 * Relevant numerical values are in Raw Data : Development Focus


 * If you spent more than 40% of a phase (of the lower "time allocation bar", not 40% of a single slider) on a design focus with importance >= 0.9 :
 * 2+ times : q increased by 0.2, and message "focus on X served game well" is added to the queue
 * 1 time : q increased by 0.1, no message
 * never : q decreased by 0.15 * MMOMOD, no message
 * If you spent more than 40% of a phase on a focus with importance < 0.8 :
 * Twice : q decreased by 0.2 * MMOMOD, and message "focus on X is a bit odd"
 * Once : q decreased by 0.1 * MMOMOD, no message
 * If you spent less than 20% of a phase on a focus with importance >= 0.9 :
 * q decreased by 0.15 * MMOMOD, and message "shouldn't forget about X" for each time it happened
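These adjustments can be sketched as follows (a sketch; the dictionaries and names are my own assumptions, with `allocations` giving each focus's share of the phase time bar and `importance` its value from Raw Data : Development Focus):

```python
# Sketch of the design-decision adjustments to the quality factor q.
# allocations: focus -> share of the phase's time bar (0..1)
# importance: focus -> importance for the genre (Raw Data)
# Names and data layout are assumptions, not the game's own code.

def design_decision_delta(allocations, importance, mmomod=1):
    delta = 0.0
    over40_good = sum(1 for f, a in allocations.items()
                      if a > 0.4 and importance[f] >= 0.9)
    over40_bad = sum(1 for f, a in allocations.items()
                     if a > 0.4 and importance[f] < 0.8)
    under20_good = sum(1 for f, a in allocations.items()
                       if a < 0.2 and importance[f] >= 0.9)
    if over40_good >= 2:
        delta += 0.2                     # "focus on X served game well"
    elif over40_good == 1:
        delta += 0.1
    else:
        delta -= 0.15 * mmomod
    if over40_bad >= 2:
        delta -= 0.2 * mmomod            # "focus on X is a bit odd"
    elif over40_bad == 1:
        delta -= 0.1 * mmomod
    delta -= 0.15 * mmomod * under20_good  # "shouldn't forget about X"
    return delta
```

For a phase with two heavily-worked important focuses, the delta is +0.2; neglecting important focuses stacks the 0.15 penalty once per occurrence (doubled for MMOs via MMOMOD).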

Combination study

 * For multi-genre titles, the value is always 2/3 * Genre 1 + 1/3 * Genre 2

Topic/Genre

 * See numerical values at Raw Data : Topic Genre Combinations


 * <= 0.6 : "Strange combination" message added to the queue
 * = 1 : "Great combination" message added to the queue
 * Note : at this stage no penalty is applied to the score for fitting/unfitting combinations.
 * Note 2:  "Great combination" increases Experience bonus by +0.2

Platform/Genre

 * See numerical values at Raw Data : Platform Genre Combinations


 * <= 0.6 : "Genre does not work well on Platform" message added to the queue
 * = 1 : "Genre works well on Platform" message added to the queue

Topic/Audience

 * See numerical values at Raw Data: Topic Audience Combinations


 * <= 0.6 : "Theme is an horrible theme for Audience" message added to the queue

Note: As of V 1.44 a penalty for unfitting combinations IS applied.

After calculating the score/TGS ratio, it is adjusted as follows:
 * Let u be the ratio score/TGS (in range 0-1 --> 0.5 means a score of 5)
 * If u >= 0.6 AND either the Topic/Genre or Topic/Audience factor is <= 0.7 --> u = 0.6 + (u - 0.6) / 2; so you lose half your score above 6
 * If u >= 0.7, then u is recalculated for each platform with a Platform/Genre value <= 0.8 : score *= Platform/Genre value; u = score/TGS

Consequences:


 * If score is below 6 nothing happens.
 * No double penalty for unfitting Topic/Genre/Audience/Platform
 * Note: Other score modifiers are applied after that, so the score is still not the final score
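The penalty can be sketched as follows (a sketch working directly on u; since score = u * TGS, multiplying the score by the Platform/Genre value is equivalent to multiplying u by it; names are my own):

```python
# Sketch of the V1.44 unfitting-combination penalty on u = score/TGS
# (u in range 0-1). Works on u directly: score *= pg is equivalent
# to u *= pg. Names are assumptions, not the game's code.

def apply_combo_penalty(u, topic_genre, topic_audience, platform_genre_values):
    if u >= 0.6 and (topic_genre <= 0.7 or topic_audience <= 0.7):
        u = 0.6 + (u - 0.6) / 2          # lose half of everything above 6
    if u >= 0.7:
        for pg in platform_genre_values:
            if pg <= 0.8:                # unfitting platform
                u *= pg
    return u
```

A score below 6 (u < 0.6) passes through unchanged, matching the first consequence above.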

Various checks

 * If the combination of Topic/Genre/Second Genre is the exact same as the previous released title, a penalty of -0.4 is applied to q
 * If the game is a sequel (or an expansion) to a game released less than 40 weeks ago, a penalty of -0.4 is applied to q
 * If the game is a sequel (not an expansion) and uses the same engine as the previous game in the series, a penalty of -0.1 is applied to q
 * If the game is a sequel and uses a better engine than the previous game, a bonus of +0.2 is applied to q
 * If the game is an MMO and the Topic/Genre combination is < 1, then 0.15 is deducted from q

Bug ratio calculation (only if there are bugs, i.e. bugs > 0; otherwise the ratio is 1)
r = 1 - (0.8 * [ (# of bugs) * 100 / (Tech + Design) ] / 100), in which [] means that "(# of bugs) * 100 / (Tech + Design)" is clamped between 0 and 100
 * If r <= 0.6 a "Riddled with bugs" message is added to the queue
 * If r < 0.9 a "Too many bugs" message is added to the queue
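The formula can be sketched as follows (names are my own):

```python
# Sketch of the bug-ratio formula; the bracketed value is clamped
# to the range 0..100. Names are assumptions, not the game's code.

def bug_ratio(bugs, tech, design):
    if bugs <= 0:
        return 1.0
    pct = min(max(bugs * 100.0 / (tech + design), 0.0), 100.0)
    return 1.0 - 0.8 * pct / 100.0

print(bug_ratio(30, 30, 30))   # 0.6 -> triggers "Riddled with bugs"
```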

Trend factor calculation (only if there is a trend going, else factor is 1)
There are 4 types of trend: genre, new topic, audience and "strange combos".

If trend type is Genre / "New Topic" / Audience :
 * If you hit the trend, t = 1.2.
 * If you don't hit the trend, t = 1.

If trend type is "Strange Combos", t depends on the Topic/Genre value (from raw data). Yes, you are penalized for not following the "Strange Combos" trend.
 * If value is 1 (great combo), t = 0.85
 * If value is 0.9, t = 1.1
 * If value is 0.8, t = 1.2
 * If value is <0.8, t = 1.4

Hint: To hit a strange combo and get the maximum Trend factor, you can make a multi-genre game with any Topic/Genre value other than exactly 1, 0.9 or 0.8. For example, a Strategy/Action Aliens game has a Topic/Genre value of 0.93 and still gets Trend factor 1.4 for Strange Combo.
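The trend factor can be sketched as follows (a sketch; per the hint above, the "Strange Combos" buckets appear to be exact-value checks, so any other Topic/Genre value falls through to 1.4; names are my own):

```python
# Sketch of the trend factor t. For Genre / "New Topic" / Audience
# trends, hitting the trend is what matters; for "Strange Combos",
# t depends on the exact Topic/Genre value. Names are assumptions.

def trend_factor(trend_type, hit_trend, topic_genre=None):
    if trend_type != "strange combos":
        return 1.2 if hit_trend else 1.0
    if topic_genre == 1:
        return 0.85          # great combo: penalized!
    if topic_genre == 0.9:
        return 1.1
    if topic_genre == 0.8:
        return 1.2
    return 1.4               # any other value, e.g. 0.93
```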

Note :  If Trend Factor > 1 then Experience bonus is  increased  by +0.2

At this stage, the first meaningful score is generated :

 * m = (Design + Tech) / (2 * Multiplier 2) (In Raw Data : Size Constants)
 * q: quality factor (calculated above)
 * p : Platform / Genre value
 * o : Topic / Audience value
 * r : bug ratio (calculated above)
 * t : trend factor (calculated above)
 * w : platform tech difference (for multi-platform games; 1 otherwise)

g = m * q * p * o * r * t

For V 1.44: g = (m + m * q) * p * o * r * w * t

g is the game score, which will be used to update your Top Score and Top Score Delta for the next game, if the conditions set in Review_Algorithm#Post-Treatment are met.

Calculating first review score:
Then, for the rest of the process we will use the review score : S = [10 * g / tgs] * x / 10
 * tgs: Target game score (as explained in the first paragraphs)
 * x : Technical expertise factor
 * [A] means A clamped back between 1 and 10 : if A > 10 then [A] = 10, if A < 1 then [A] = 1, in all other cases [A] = A
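The game score and first review score can be sketched together as follows (a sketch of the V1.44 formulas; the single-letter parameters follow the definitions above, everything else is my own naming):

```python
# Sketch of the V1.44 game score g and the first review score S.
# Parameter names follow the single-letter definitions above;
# everything else is an assumption, not the game's own code.

def game_score(design, tech, multiplier2, q, p, o, r, w, t):
    m = (design + tech) / (2.0 * multiplier2)
    return (m + m * q) * p * o * r * w * t      # V1.44 formula

def review_score(g, tgs, x):
    a = 10.0 * g / tgs
    a = min(max(a, 1.0), 10.0)   # [A]: clamp between 1 and 10
    return a * x / 10.0
```

With x = 10 (full technical expertise), reaching the Target Game Score exactly gives S = 10; falling short gives a proportional slice.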

First Pass

 * The first pass of post-treatment only happens if S >= 9 and one of the following two conditions is met :


 * If the quality factor q is < 1.1, there is an 80% likelihood that it happens
 * (Probably bugged, read below) If a staff member has contributed more than 0.2 and less than 2 games in his career, and has been assigned to at least one field in the current game (meaning his load % for the current game is > 0, which also means this condition cannot trigger in small games), it always happens.
 * It then generates a random number :


 * 75% chance that the score becomes a random value in range [8.45 ; 9.1]
 * 25% chance that the score becomes a random value in range [9 ; 9.25]
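The first pass can be sketched as follows (a sketch; `has_low_contribution_dev` stands for the probably-bugged contribution condition discussed below, and all names are my own):

```python
# Sketch of the optional first pass of post-treatment.
# Names and the rng parameter are assumptions, not the game's code.
import random

def first_pass(s, q, has_low_contribution_dev, rng=random):
    if s < 9:
        return s                  # the pass never applies below 9
    triggered = has_low_contribution_dev or (q < 1.1 and rng.random() < 0.8)
    if not triggered:
        return s
    if rng.random() < 0.75:
        return rng.uniform(8.45, 9.1)
    return rng.uniform(9.0, 9.25)
```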

About the "contribution":

Experiments with the game show that the "contribution" value of an employee increases by ~1.02 after every completed game, regardless of game size. Therefore, the second condition of the First Pass is probably bugged.

Judging from the code and the application of common sense, it is supposed to count the percentage of load the employee has taken over his career: after you start putting an employee in "important roles" (meaning you assign him to development fields), then until he has done 200% total load on medium or larger games, he will be considered "unskilled" and lower your final score.

However, it instead works like this: the first time the employee works on a game for you, right before the review score is calculated, his "contributed" value is set to around ~1.02, so he triggers this condition; the second time, he no longer does, since the value is already at ~2.04. What this means is that the employee will only screw up one game for you.

Also note that this ~1.02 value is for a fully "effective" employee. If your employee is not at full effectiveness (demands a vacation or is freshly hired) during development, he will receive less than that, and probably less than 1. In that case, his first AND second games will have lower scores.

Also note that this increase in contribution only happens during reviewing. Therefore, you cannot "practice" with an employee by trashing his first games - a trashed game does not increase contribution, only experience.

Second Pass

 * The second pass of post-treatment happens only before Year 4, and only if the player has published fewer than 2 games that count as high scores (conditions in this paragraph). Let S be the score after the optional first pass.


 * If S = 10 then the score becomes a random number in range [8.5 ; 8.95], but will still count as a high score
 * If 10 > S >= 9 then the score is decreased by a random number in range [1.05 ; 1.25], but will still count as a high score
 * If 9 > S > 8.5 then the score is decreased by a random number in range [0.4 ; 0.6], and will not count as a high score
 * If the score is still >= 9 then it is counted as a high score


 * If the score is a high score and it is exactly the third one, it is arbitrarily set to 10
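The second pass can be sketched as follows (a sketch; names are my own, and the "exactly third high score becomes 10" rule is assumed to happen separately from this function):

```python
# Sketch of the second pass of post-treatment (only before Year 4 and
# with fewer than 2 high scores so far). Names are assumptions.
# The "exactly third high score is set to 10" rule happens separately.
import random

def second_pass(s, year, high_score_count, rng=random):
    """Return (new score, counts as a high score)."""
    if year >= 4 or high_score_count >= 2:
        return s, s >= 9                  # pass does not apply
    if s == 10:
        return rng.uniform(8.5, 8.95), True
    if s >= 9:
        return s - rng.uniform(1.05, 1.25), True
    if s > 8.5:
        return s - rng.uniform(0.4, 0.6), False
    return s, False
```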


 * Congratulations! You now know your real review score


 * However, there is a final step to get the review screen :

Review Screen
The "real" score is first rounded down.

Then, each reviewer's score and message is generated as such :
 1. Determine score inflation
 2. A random value is added to the score. It can be :
  * 0 : 50% chance
  * 1 : 25% chance
  * -1 : 25% chance
 3. However, if the score is 5 or 6, the following values are possible :
  * 0 : 50% chance
  * 1 : 23.75% chance
  * -1 : 23.75% chance
  * 2 : 1.25% chance
  * -2 : 1.25% chance
 4. The score is then clamped between 1 and 10
 5. If we are generating the fourth score, the three previous ones were 10 and this one is supposed to be 10 too, then if a perfect score cannot be reached (see Perfect score and Technical expertise), this score becomes a 9
 6. Generate messages : there are three message queues : good ones, bad ones (generated during score calculation) and standard ones
  * If score > 2 : some chance of picking a good message, some chance of picking a standard one
  * If score < 6 : pick a bad message or a standard one
  * Else pick a standard message
  * If either the good or the bad queue is empty when trying to choose a message, also pick a standard message
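The per-reviewer randomization (the offset and clamp steps above) can be sketched as follows (a sketch; names are my own):

```python
# Sketch of the per-reviewer random offset and clamp; `base` is the
# real score after rounding down. Names are assumptions.
import random

def reviewer_score(base, rng=random):
    roll = rng.random()
    if base in (5, 6):                    # wider offsets possible
        if roll < 0.50:
            offset = 0
        elif roll < 0.7375:
            offset = 1
        elif roll < 0.975:
            offset = -1
        elif roll < 0.9875:
            offset = 2
        else:
            offset = -2
    else:
        if roll < 0.50:
            offset = 0
        elif roll < 0.75:
            offset = 1
        else:
            offset = -1
    return min(max(base + offset, 1), 10)  # clamp to 1..10
```

Each of the four reviewer scores is an independent draw from this distribution, which is why a rounded-down real score of 7 can show up on screen as a mix of 6s, 7s and 8s.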



