Game theory and market entry
As we will see in a later section, those who hope to use game theory to explain strategic reasoning, as opposed to merely strategic behavior, face some special philosophical and practical problems. Since game theory is a technology for formal modeling, we must have a device for thinking of utility maximization in mathematical terms.
Such a device is called a utility function. We will introduce the general idea of a utility function through the special case of an ordinal utility function. Later, we will encounter utility functions that incorporate more information. Suppose that agent x prefers bundle a to bundle b and bundle b to bundle c.
We then map these onto a list of numbers, where the function maps the highest-ranked bundle onto the largest number in the list, the second-highest-ranked bundle onto the next-largest number in the list, and so on, thus: bundle a is assigned 3, bundle b is assigned 2, and bundle c is assigned 1. The only property mapped by this function is order. The magnitudes of the numbers are irrelevant; that is, it must not be inferred that x gets 3 times as much utility from bundle a as she gets from bundle c.
Thus we could represent exactly the same utility function by any other assignment of numbers that preserves this order, for example by assigning 100 to bundle a, 7 to bundle b and -3 to bundle c. The numbers featuring in an ordinal utility function are thus not measuring any quantity of anything.
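To see the point concretely, here is a minimal Python sketch; the bundle names and the particular numbers are just the illustrative values used above plus an arbitrary alternative. It checks that two different numerical assignments encode exactly the same ordinal ranking.

```python
# Two candidate ordinal utility functions for agent x over bundles a, b, c.
# The particular numbers are arbitrary illustrations; only the order matters.
u1 = {"a": 3, "b": 2, "c": 1}
u2 = {"a": 100, "b": 7, "c": -3}

def ranking(u):
    """Return the bundles ordered from most to least preferred."""
    return sorted(u, key=u.get, reverse=True)

# Both assignments induce the same preference order a > b > c,
# so as *ordinal* utility functions they are the same function.
assert ranking(u1) == ranking(u2) == ["a", "b", "c"]

# Ratios of the numbers are meaningless: u1 suggests "3 times" c's utility
# for a, u2 does not, and neither inference is licensed.
print(u1["a"] / u1["c"], u2["a"] / u2["c"])
```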
For the moment, however, we will need only ordinal functions. Any situation in which at least one agent can act to maximize his utility only by anticipating (either consciously, or just implicitly in his behavior) the responses to his actions by one or more other agents is called a game. Agents involved in games are referred to as players. If all agents have optimal actions regardless of what the others do, as in purely parametric situations or conditions of monopoly or perfect competition (see Section 1 above), we can model this without appeal to game theory; otherwise, we need it.
In literature critical of economics in general, or of the importation of game theory into humanistic disciplines, this kind of rhetoric has increasingly become a magnet for attack. The reader should note that these two uses of one word within the same discipline are technically unconnected. Furthermore, original RPT has been specified over the years by several different sets of axioms for different modeling purposes.
Once we decide to treat rationality as a technical concept, each time we adjust the axioms we effectively modify the concept. Consequently, in any discussion involving economists and philosophers together, we can find ourselves in a situation where different participants use the same word to refer to something different.
For readers new to economics, game theory, decision theory and the philosophy of action, this situation naturally presents a challenge. We might summarize the intuition behind all this as follows: an entity is usefully modeled as an economically rational agent to the extent that it has alternatives, and chooses from amongst these in a way that is motivated, at least more often than not, by what seems best for its purposes.
Economic rationality might in some cases be satisfied by internal computations performed by an agent, and she might or might not be aware of computing or having computed its conditions and implications.
In other cases, economic rationality might simply be embodied in behavioral dispositions built by natural, cultural or market selection. Each player in a game faces a choice among two or more possible strategies; the significance of this notion will become clear when we take up some sample games below. A crucial aspect of the specification of a game involves the information that players have when they choose strategies.
A board-game of sequential moves in which both players watch all the action and know the rules in common, such as chess, is an instance of a game of perfect information. By contrast, the example of the bridge-crossing game from Section 1 above illustrates a game of imperfect information, since the fugitive must choose a bridge to cross without knowing the bridge at which the pursuer has chosen to wait, and the pursuer similarly makes her decision in ignorance of the choices of her quarry.
The difference between games of perfect and of imperfect information is related to (though certainly not identical with!) the distinction between sequential-move and simultaneous-move games. Let us begin by distinguishing between sequential-move and simultaneous-move games in terms of information. It is natural, as a first approximation, to think of sequential-move games as being ones in which players choose their strategies one after the other, and of simultaneous-move games as ones in which players choose their strategies at the same time.
For example, if two competing businesses are both planning marketing campaigns, one might commit to its strategy months before the other does; but if neither knows what the other has committed to or will commit to when they make their decisions, this is a simultaneous-move game. Chess, by contrast, is normally played as a sequential-move game: you see what your opponent has done before choosing your own next action.
Chess can be turned into a simultaneous-move game if the players each call moves on a common board while isolated from one another; but this is a very different game from conventional chess. It was said above that the distinction between sequential-move and simultaneous-move games is not identical to the distinction between perfect-information and imperfect-information games.
Explaining why this is so is a good way of establishing full understanding of both sets of concepts. As simultaneous-move games were characterized in the previous paragraph, it must be true that all simultaneous-move games are games of imperfect information. However, some games may contain mixes of sequential and simultaneous moves. For example, two firms might commit to their marketing strategies independently and in secrecy from one another, but thereafter engage in pricing competition in full view of one another.
If the optimal marketing strategies were partially or wholly dependent on what was expected to happen in the subsequent pricing game, then the two stages would need to be analyzed as a single game, in which a stage of sequential play followed a stage of simultaneous play.
Whole games that involve mixed stages of this sort are games of imperfect information, however temporally staged they might be. Games of perfect information (as the name implies) denote cases where no moves are simultaneous and where no player ever forgets what has gone before. As previously noted, games of perfect information are the logically simplest sorts of games. This is so because in such games (as long as they are finite, that is, terminate after a known number of actions) players and analysts can use a straightforward procedure for predicting outcomes.
A player in such a game chooses her first action by considering each series of responses and counter-responses that will result from each action open to her. She then asks herself which of the available final outcomes brings her the highest utility, and chooses the action that starts the chain leading to this outcome. This process is called backward induction because the reasoning works backwards from eventual outcomes to present choice problems. There will be much more to be said about backward induction and its properties in a later section when we come to discuss equilibrium and equilibrium selection.
For now, it has been described just so we can use it to introduce one of the two types of mathematical objects used to represent games: game trees.
A game tree is an example of what mathematicians call a directed graph. That is, it is a set of connected nodes in which the overall graph has a direction. We can draw trees from the top of the page to the bottom, or from left to right. In the first case, nodes at the top of the page are interpreted as coming earlier in the sequence of actions. In the case of a tree drawn from left to right, leftward nodes are prior in the sequence to rightward ones.
An unlabelled tree is simply this branching structure of nodes, with no players, actions or payoffs yet attached. The point of representing games using trees can best be grasped by visualizing the use of them in supporting backward-induction reasoning. Just imagine the player or analyst beginning at the end of the tree, where outcomes are displayed, and then working backwards from these, looking for sets of strategies that describe paths leading to them. We will present some examples of this interactive path selection, and detailed techniques for reasoning through these examples, after we have described a situation we can use a tree to model.
Trees are used to represent sequential games, because they show the order in which actions are taken by the players. However, games are sometimes represented on matrices rather than trees. This is the second type of mathematical object used to represent games.
For example, it makes sense to display the river-crossing game from Section 1 on a matrix, since in that game both the fugitive and the hunter have just one move each, and each chooses their move in ignorance of what the other has decided to do. Here, then, is part of the matrix, with the fugitive choosing a row, the hunter choosing a column, and the fugitive's payoff listed first in each cell:

                          hunter: safe bridge   hunter: rocky bridge   hunter: cobra bridge
  fugitive: safe bridge          0, 1                  1, 0                   1, 0
  fugitive: rocky bridge          ?                    0, 1                    ?
  fugitive: cobra bridge          ?                     ?                     0, 1

Thus, for example, the upper left-hand corner above shows that when the fugitive crosses at the safe bridge and the hunter is waiting there, the fugitive gets a payoff of 0 and the hunter gets a payoff of 1.
Whenever the hunter waits at the bridge chosen by the fugitive, the fugitive is shot. These outcomes all deliver the payoff vector 0, 1. You can find them descending diagonally across the matrix above from the upper left-hand corner. Whenever the fugitive chooses the safe bridge but the hunter waits at another, the fugitive gets safely across, yielding the payoff vector 1, 0. These two outcomes are shown in the second two cells of the top row.
All of the other cells are marked, for now, with question marks. The problem here is that if the fugitive crosses at either the rocky bridge or the cobra bridge, he introduces parametric factors into the game.
In these cases, he takes on some risk of getting killed, and so of producing the payoff vector 0, 1, that is independent of anything the hunter does. In general, a strategic-form game could represent any one of several extensive-form games, so a strategic-form game is best thought of as being a set of extensive-form games.
Where order of play is relevant, the extensive form must be specified or your conclusions will be unreliable. The distinctions described above are difficult to fully grasp if all one has to go on are abstract descriptions. Suppose that the police have arrested two people whom they know have committed an armed robbery together.
Unfortunately, they lack enough admissible evidence to get a jury to convict. They do, however, have enough evidence to send each prisoner away for two years for theft of the getaway car. We can represent the problem faced by both of them on a single matrix that captures the way in which their separate choices interact; this is the strategic form of their game, with Player I choosing a row, Player II choosing a column, and Player I's payoff listed first in each cell:

                         Player II: confess    Player II: refuse
  Player I: confess           2, 2                  4, 0
  Player I: refuse            0, 4                  3, 3

Each cell of the matrix gives the payoffs to both players for each combination of actions. So, if both players confess then they each get a payoff of 2 (5 years in prison each).
This appears in the upper-left cell. If neither of them confesses, they each get a payoff of 3 (2 years in prison each). This appears as the lower-right cell. If Player I confesses while Player II refuses, Player I gets a payoff of 4 (going free) and Player II gets a payoff of 0 (ten years in prison); this appears in the upper-right cell. The reverse situation, in which Player II confesses and Player I refuses, appears in the lower-left cell. Each player evaluates his or her two possible actions here by comparing their personal payoffs in each column, since this shows you which of their actions is preferable, just to themselves, for each possible action by their partner.
Player II, meanwhile, evaluates her actions by comparing her payoffs across each row, and she comes to exactly the same conclusion that Player I does. Wherever one action for a player is superior to her other actions for each possible action by the opponent, we say that the first action strictly dominates the second one. In the PD, then, confessing strictly dominates refusing for both players.
Both players know this about each other, thus entirely eliminating any temptation to depart from the strictly dominant path. Thus both players will confess, and both will go to prison for 5 years. The players, and analysts, can predict this outcome using a mechanical procedure, known as iterated elimination of strictly dominated strategies. Player I can see by examining the matrix that his payoffs in each cell of the top row are higher than his payoffs in each corresponding cell of the bottom row.
Therefore, it can never be utility-maximizing for him to play his bottom-row strategy, viz., refusing to confess, no matter what his partner does; since that strategy will never be played, we can simply delete the bottom row from the matrix. Now it is obvious that Player II will not refuse to confess, since her payoff from confessing in the two cells that remain is higher than her payoff from refusing. So, once again, we can delete the one-cell column on the right from the game. We now have only one cell remaining, that corresponding to the outcome brought about by mutual confession. Since the reasoning that led us to delete all other possible outcomes depended at each step only on the premise that both players are economically rational, that is, will choose strategies that lead to higher payoffs over strategies that lead to lower ones, there are strong grounds for viewing joint confession as the solution to the game, the outcome on which its play must converge to the extent that economic rationality correctly models the behavior of the players.
Had we begun by deleting the right-hand column and then deleted the bottom row, we would have arrived at the same solution. The PD, however, is a special game in several respects. One of these respects is that all its rows and columns are either strictly dominated or strictly dominant. In any strategic-form game where this is true, iterated elimination of strictly dominated strategies is guaranteed to yield a unique solution. Later, however, we will see that for many games this condition does not apply, and then our analytic task is less straightforward.
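As a concrete illustration of the procedure, here is a minimal Python sketch of iterated elimination of strictly dominated strategies applied to the PD payoffs given above; the encoding and function names are just one illustrative way of writing it.

```python
# Payoffs (row player = Player I, column player = Player II).
# Order of strategies: 0 = confess, 1 = refuse.
payoff_I  = [[2, 4],
             [0, 3]]
payoff_II = [[2, 0],
             [4, 3]]

def strictly_dominated(payoffs, s, alive_own, alive_other, row_player):
    """Is own strategy s strictly dominated by some other surviving strategy?"""
    for t in alive_own:
        if t == s:
            continue
        if row_player:
            better = all(payoffs[t][j] > payoffs[s][j] for j in alive_other)
        else:
            better = all(payoffs[i][t] > payoffs[i][s] for i in alive_other)
        if better:
            return True
    return False

rows, cols = {0, 1}, {0, 1}
changed = True
while changed:
    changed = False
    for s in list(rows):
        if len(rows) > 1 and strictly_dominated(payoff_I, s, rows, cols, True):
            rows.discard(s)
            changed = True
    for s in list(cols):
        if len(cols) > 1 and strictly_dominated(payoff_II, s, cols, rows, False):
            cols.discard(s)
            changed = True

print(rows, cols)   # {0} {0}: only (confess, confess) survives
```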
The reader will probably have noticed something disturbing about the outcome of the PD. Had both players refused to confess, they would each have gone to prison for only 2 years, obtaining the payoff vector 3, 3; yet each player's individually rational strategy leads to the jointly worse outcome 2, 2. This is the most important fact about the PD, and its significance for game theory is quite general.
For now, however, let us stay with our use of this particular game to illustrate the difference between strategic and extensive forms. People are often tempted to think that the players could avoid this outcome if only they could communicate and agree in advance to refuse to confess. In fact, however, this intuition is misleading and its conclusion is false. If Player I is convinced that his partner will stick to the bargain then he can seize the opportunity to go scot-free by confessing. Of course, he realizes that the same temptation will occur to Player II; but in that case he again wants to make sure he confesses, as this is his only means of avoiding his worst outcome.
But now suppose that the prisoners do not move simultaneously. This is the sort of situation that people who think non-communication important must have in mind. Now Player II will be able to see, when it comes to her own choice, that Player I has remained steadfast, and she need not be concerned about being suckered.
This gives us our opportunity to introduce game-trees and the method of analysis appropriate to them. First, however, here are definitions of some concepts that will be helpful in analyzing game-trees:
Terminal node: any node which, if reached, ends the game. Each terminal node corresponds to an outcome. Strategy: a program instructing a player which action to take at every node in the tree where she could possibly be called on to make a choice. These quick definitions may not mean very much to you until you follow them being put to use in our analyses of trees below. It will probably be best if you scroll back and forth between them and the examples as we work through them.
Suppose, then, that the prisoners have agreed that Player I is to commit to refusal first, after which Player II will reciprocate when the police ask for her choice.
Each node is numbered 1, 2, 3, …, from top to bottom, for ease of reference in discussion. Here, then, is the tree: at node 1, Player I chooses between confessing (D) and refusing (C); if he confesses the game moves to node 2, and if he refuses it moves to node 3; at each of these nodes Player II chooses between D and C, leading to four terminal nodes with the payoffs 2,2 and 4,0 (below node 2) and 0,4 and 3,3 (below node 3), Player I's payoff listed first in each pair. Look first at each of the terminal nodes (those along the bottom).
These represent possible outcomes. Each of the structures descending from the nodes 1, 2 and 3 respectively is a subgame. If the subgame descending from node 3 is played, then Player II will face a choice between a payoff of 4 and a payoff of 3. Consult the second number, representing her payoff, in each set at a terminal node descending from node 3.
II earns her higher payoff by playing D. We may therefore replace the entire subgame with an assignment of the payoff 0,4 directly to node 3, since this is the outcome that will be realized if the game reaches that node. Now consider the subgame descending from node 2. Here, II faces a choice between a payoff of 2 and one of 0. She obtains her higher payoff, 2, by playing D. We may therefore assign the payoff 2,2 directly to node 2.
Now we move to the subgame descending from node 1. This subgame is, of course, identical to the whole game; all games are subgames of themselves. Player I now faces a choice between outcomes 2,2 and 0,4. Consulting the first numbers in each of these sets, he sees that he gets his higher payoff—2—by playing D. D is, of course, the option of confessing. So Player I confesses, and then Player II also confesses, yielding the same outcome as in the strategic-form representation.
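The backward-induction reasoning just worked through can also be written out mechanically. Below is a minimal Python sketch of the same tree (node 1 for Player I, nodes 2 and 3 for Player II, payoffs listed as Player I's then Player II's); the data structure is just one illustrative encoding.

```python
# Each decision node: (player_index, {action_label: subtree}).
# Each terminal node: a payoff tuple (Player I, Player II).
# D = confess, C = refuse, as in the text.
tree = (0, {                       # node 1: Player I
    "D": (1, {"D": (2, 2),         # node 2: Player II, after I confesses
              "C": (4, 0)}),
    "C": (1, {"D": (0, 4),         # node 3: Player II, after I refuses
              "C": (3, 3)}),
})

def backward_induction(node):
    """Return (payoff vector, path of actions) under optimal play."""
    if not isinstance(node[1], dict):
        return node, []            # terminal node: payoffs, empty path
    player, branches = node
    best = None
    for action, child in branches.items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

print(backward_induction(tree))    # ((2, 2), ['D', 'D']): both confess
```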
What has happened here intuitively is that Player I realizes that if he plays C (refuse to confess) at node 1, then Player II will be able to maximize her utility by suckering him and playing D. On the tree, this happens at node 3. This leaves Player I with a payoff of 0 (ten years in prison), which he can avoid only by playing D to begin with. He therefore defects from the agreement. In this game, then, the strategic-form and extensive-form representations yield the same outcome; this will often not be true of other games, however. As noted earlier in this section, sometimes we must represent simultaneous moves within games that are otherwise sequential.
We represent such games using the device of information sets. Consider a tree in which Player I moves first and Player II moves second, at one of two nodes, b or c, depending on which action Player I chose. An oval drawn around nodes b and c indicates that they lie within a common information set.
This means that at these nodes players cannot infer back up the path from whence they came; Player II does not know, in choosing her strategy, whether she is at b or c.
But you will recall from earlier in this section that this is just what defines two moves as simultaneous. We can thus see that the method of representing games as trees is entirely general.
If no node after the initial node is alone in an information set on its tree, so that the game has only one subgame (itself), then the whole game is one of simultaneous play. If at least one node shares its information set with another, while others are alone, the game involves both simultaneous and sequential play, and so is still a game of imperfect information.
Only if all information sets are inhabited by just one node do we have a game of perfect information. Following the general practice in economics, game theorists refer to the solutions of games as equilibria. Note that, in both physical and economic systems, endogenously stable states might never be directly observed because the systems in question are never isolated from exogenous influences that move and destabilize them.
In both classical mechanics and in economics, equilibrium concepts are tools for analysis, not predictions of what we expect to observe. As we will see in later sections, it is possible to maintain this understanding of equilibria in the case of game theory. However, as we noted in Section 2, some theorists understand the point of game theory differently.
For them, a solution to a game must be an outcome that a rational agent would predict using the mechanisms of rational computation alone. The interest of philosophers in game theory is more often motivated by this ambition than is that of the economist or other scientist.
A set of strategies is a Nash equilibrium (NE) just in case no player could improve her payoff, given the strategies of all other players in the game, by changing her strategy. Notice how closely this idea is related to the idea of strict dominance: no strategy could be a NE strategy if it is strictly dominated.
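As a sketch of how this definition can be checked mechanically, the following Python fragment tests every cell of the PD matrix from earlier in this section for the NE property; the layout of the code is illustrative only.

```python
# PD payoffs, (Player I, Player II); rows = Player I, columns = Player II.
# Strategy 0 = confess, 1 = refuse.
payoffs = {
    (0, 0): (2, 2), (0, 1): (4, 0),
    (1, 0): (0, 4), (1, 1): (3, 3),
}

def is_nash(cell):
    """No player can improve by unilaterally switching strategies."""
    r, c = cell
    best_for_I  = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in (0, 1))
    best_for_II = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in (0, 1))
    return best_for_I and best_for_II

print([cell for cell in payoffs if is_nash(cell)])  # [(0, 0)]: joint confession
```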
Now, almost all theorists agree that avoidance of strictly dominated strategies is a minimum requirement of economic rationality. A player who knowingly chooses a strictly dominated strategy directly violates clause iii of the definition of economic agency as given in Section 2.
This implies that if a game has an outcome that is a unique NE, as in the case of joint confession in the PD, that must be its unique solution. We can specify one class of games in which NE is always not only necessary but sufficient as a solution concept.
These are finite perfect-information games that are also zero-sum. A zero-sum game in the case of a game involving just two players is one in which one player can only be made better off by making the other player worse off. Tic-tac-toe is a simple example of such a game: any move that brings one player closer to winning brings her opponent closer to losing, and vice-versa.
In such a game, if both players choose best strategies, the outcome is the game's unique solution; in tic-tac-toe, this is a draw. However, most games do not have this property, and NE then faces problems as a general solution concept. For one thing, it is highly unlikely that theorists have yet discovered all of the possible problems. However, we can try to generalize the issues a bit.
First, there is the problem that in most non-zero-sum games, there is more than one NE, but not all NE look equally plausible as the solutions upon which strategically alert players would hit. Consider the strategic-form game below, taken from Kreps, p. This game has two NE: s1-t1 and s2-t2.
Note that no rows or columns are strictly dominated here. But if Player I is playing s1 then Player II can do no better than t1, and vice-versa; and similarly for the s2-t2 pair. If NE is our only solution concept, then we shall be forced to say that either of these outcomes is equally persuasive as a solution. Note that this is not like the situation in the PD, where the socially superior situation is unachievable because it is not a NE.

Let us now return to the extensive (tree) form and how such games are solved. The players are identified at each decision node of the game, each place where a player might potentially have to choose a strategy.
From each node there extend branches representing the strategy choices of a player. At the end of the final set of branches are the payoffs for every possible outcome of the game. This is the complete description of the game. There are also subgames within the full game. A subgame consists of all of the subsequent strategy decisions that follow from one particular node. How will the game resolve itself? To determine the outcome it is necessary to use backward induction: start at the last play of the game and determine what the player with the last turn will do in each situation; then, given this deduction, determine what the player with the second-to-last turn will do at that turn; and continue this way until the first turn is reached.
Using backward induction leads to the Subgame Perfect Nash Equilibrium of the game. The Subgame Perfect Nash Equilibrium (SPNE) is the solution in which every player, at every turn of the game, is playing an individually optimal strategy. For the curry pricing game illustrated in the figure, Tridip is concerned only with his own payoffs (shown in red) and will play Medium if Ashok plays High, Low if Ashok plays Medium, and Low if Ashok plays Low. Because of common knowledge, Ashok knows this as well and so faces only three possible outcomes.
Ashok knows that if he plays High, Tridip will play Medium; if he plays Medium, Tridip will play Low; and if he plays Low, Tridip will play Low. Since the first of these outcomes gives him his highest payoff, Ashok will pick High.
Since Ashok picks High, Tridip will pick Medium and the game ends.

Consider next an entry game: Vito is deciding whether to open a pizza restaurant in competition with an incumbent, Gino, and Gino threatens to fight him if he enters. The game and its payoffs can be given in normal form. But would Vito really believe that Gino would fight if he entered?
Probably not. We call this a non-credible threat: a strategy choice intended to dissuade a rival that is against the best interest of the player making it, and therefore not rational to carry out. A better description of this game is therefore a sequential one, where Vito first chooses whether to open a pizza restaurant and Gino then has to decide how to respond.
Clearly the only individually rational thing for Gino to do if Vito enters is to accommodate and, since Vito knows this, he will enter. What this version of the game does is to eliminate the equilibrium based on a non-credible threat.
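Since the actual payoff numbers for the Vito-Gino game are not reproduced here, the following Python sketch uses hypothetical payoffs, chosen only so that fighting an entrant is worse for Gino than accommodating; it shows backward induction discarding the non-credible threat.

```python
# Hypothetical payoffs (Vito, Gino); the numbers are assumptions, chosen only
# so that fighting an entrant is worse for Gino than accommodating.
game = {
    "stay out":               (0, 10),
    ("enter", "fight"):       (-1, 2),
    ("enter", "accommodate"): (5, 5),
}

# Backward induction: if Vito enters, Gino picks his best reply...
gino_reply = max(["fight", "accommodate"],
                 key=lambda a: game[("enter", a)][1])
# ...and Vito compares entering (given that reply) with staying out.
enter_value = game[("enter", gino_reply)][0]
vito_choice = "enter" if enter_value > game["stay out"][0] else "stay out"

print(vito_choice, gino_reply)   # enter accommodate
```

The equilibrium in which Gino fights survives only in the simultaneous, normal-form version of the game; once the order of moves is made explicit, backward induction removes it.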
Vito knows that if he decides to open up a pizza shop, it is irrational for Gino to fight, because Gino would only make himself worse off. Since Vito knows that Gino will accommodate, the best decision for Vito is to open his restaurant.

Any game can be played more than once to create a larger game. This is a normal form game in each round but a larger game when taken as a whole. This raises the possibility that a larger strategy could be employed, for example one contingent on the strategy choice of the opponent in the previous round.
Consider, for example, a "valedictorian game" between two students, Lena and Sven, each of whom must decide whether or not to share with the other; the game has the structure of a Prisoner's Dilemma, so in the one-shot Nash equilibrium neither shares. In other words, there is an outcome that they both prefer, but they fail to reach it because of the individual strategic incentive to try to do better for themselves. But if the same game were repeated more than once over the course of the school year, it is quite reasonable to ask whether such repetition would lead the players to a different outcome. If the players know they will face the same situation again, will they be more inclined to cooperate and reach the mutually beneficial outcome?
If we played the valedictorian game twice, then the strategies for each player would entail their strategy choices for each round of the game. Suppose, for example, the two talked before the game and acknowledged that they would both be better off cooperating and sharing. In the repeated game, is cooperation ever a Nash equilibrium strategy? To answer this question we have to think about how to solve the game.
Since this game is repeated a finite number of times, in our case two, it has a last round; and, as with a sequential game, the appropriate method of solving it is backward induction, which finds the Subgame Perfect Nash Equilibrium. In a repeated game, the rounds remaining at any point constitute a subgame of the overall game.
So in our case the second and final round of the game is a subgame. So to solve the game we have first to think about the outcome of the final round. Because nothing follows the final round, it is simply the one-shot game, and its only Nash equilibrium is for both players not to share. Now that we know what will happen in the final round of the game, we have to ask what the Nash equilibrium strategy is in the first round of play. Since they both know that in the final round of play the only outcome is for both to not share, they also know that there is no reason to share in the first round either.
Well, even if they agree to share, when push comes to shove, not sharing is better individually, and since not sharing is going to happen in the final round anyway, there is no way to create an incentive to share in the first round through a final round punishment mechanism.
Things change when normal-form games are repeated infinitely, because there is no final round and thus backward induction does not work: there is nowhere to work backward from.
This aspect of such games allows space for reward and punishment strategies that might create incentives under which cooperation becomes a Nash equilibrium. Suppose that Lena and Sven are now adults in the workplace and they work together in a company where their pay is based partly on their performance on a monthly aptitude test. The key here is that they foresee working together for as long as they can imagine. In other words, they each perceive the possibility that they will keep playing this game for an indeterminate amount of time: there is no determined last round of the game and therefore no way to use backward induction to solve it.
Players have to look forward to determine optimal strategies. So what could a strategy to induce cooperation look like? Consider the following trigger strategy: share in the first period, and keep sharing as long as the other player has always shared; if the other player ever fails to share, never share again. The question we have to answer is: is both players playing this strategy a Nash equilibrium? To answer it we have to figure out the best response to the strategy: is it to play the same strategy, or is there a better strategy to play in response? Well, in the first period Sven will get the mutual-sharing payoff because they both share; and since they both shared in period one, they will both share in period two, and it will continue on like this forever.
What is the payoff from not cooperating? Well, in the period in which Sven decides not to share, Lena will still be sharing, so Sven gets the higher one-period payoff from deviating; but after that Lena will not share, and Sven, knowing this, will not share either, so he will get 95 for every round after. Note that we are only comparing the two strategies from the moment Sven decides to deviate, as the two payoff streams are identical up to that point and therefore cancel out. Comparing the streams, cooperation is better as long as the extra 5 a cooperating player gets for every subsequent period after the first is better than the extra 10 the deviating player gets in the first period.
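A minimal sketch of this comparison, using per-period payoffs consistent with the figures in the passage (100 for mutual sharing, 110 for deviating while the other still shares, 95 once both stop sharing) and adding, as an extra assumption, a discount factor delta so that the infinite payoff streams have finite values:

```python
# Hypothetical per-period payoffs inferred from the passage: 100 if both
# share, 110 for deviating while the other shares, 95 once both stop sharing.
SHARE, TEMPT, PUNISH = 100, 110, 95

def value(first, rest, delta):
    """Present value of: 'first' now, then 'rest' in every later period."""
    return first + delta * rest / (1 - delta)

def cooperation_pays(delta):
    cooperate = value(SHARE, SHARE, delta)   # share every period
    deviate   = value(TEMPT, PUNISH, delta)  # grab 110 once, then 95 forever
    return cooperate >= deviate

# Cooperation beats deviation once players care enough about the future:
# 10 <= 5 * delta / (1 - delta), i.e. delta >= 2/3.
for delta in (0.5, 0.6, 2 / 3, 0.9):
    print(round(delta, 3), cooperation_pays(delta))
```

Without discounting, as in the passage, the extra 5 earned in every later period always outweighs the one-off extra 10, so cooperation can be sustained by this kind of punishment strategy.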
Consider now market entry when there are established incumbents. The incumbents have two options: either to compete or to accommodate. We can organize the analysis around a few ideas from game theory. Critical timeline (CTL): management behaviour can be observed as signals, and as patterns in those signals.
Patterns do emerge in the observed behaviour: patterns in price movements, or patterns to do with achieving growth through acquisition. The new entrant has to observe these patterns, and the management types of the incumbents, over a considerable CTL in order to forecast their reaction to his entry: will it be competitive or accommodative?
The incumbents will almost certainly have reacted in some way when previous entrants tried to enter, so the new entrant can study and analyze this history over the CTL to forecast their likely reaction, especially since a firm's management tends to repeat its type over and over again, particularly when it has succeeded. Reaction functions: when the new entrant enters the market, the reaction from the incumbents will be either passive (a Cournot model), adjusting the quantity supplied so as to balance the market.
Or the reaction will be aggressive (a Bertrand model), undercutting the new entrant's price and accordingly starting a price war. Faced with these alternatives, the incumbent will think this way: the entrant has already entered the market and chosen a price.
If I choose to cut price and enter a price war, we will all end up with no profit (profit is zero), so the best reaction is to choose the output that maximizes my profit given the entrant's output. So after the entrant enters, the incumbent will decrease his output, as described by his reaction function. The incumbent reasons that if he increased his output instead, the market price would go down and his profit would go down with it.
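A sketch of the "choose the output that maximizes my profit given the entrant's output" logic, assuming a linear inverse demand P = a - b(q1 + q2) and a constant marginal cost c; all parameter values are hypothetical.

```python
# Cournot best response under linear inverse demand P = a - b*(q_inc + q_ent)
# and constant marginal cost c. All parameter values are hypothetical.
a, b, c = 100.0, 1.0, 20.0

def best_response(q_other):
    """Incumbent's profit-maximizing output given the other firm's output.

    Profit = (a - b*(q + q_other) - c) * q; setting d(profit)/dq = 0 gives
    q = (a - c - b*q_other) / (2*b), truncated at zero.
    """
    return max(0.0, (a - c - b * q_other) / (2 * b))

monopoly_output = best_response(0.0)    # 40.0: incumbent alone in the market
after_entry     = best_response(20.0)   # 30.0: entrant producing 20 units

# The reaction function slopes down: the more the entrant produces,
# the less the incumbent produces, rather than starting a price war.
print(monopoly_output, after_entry)
```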
Knowledge of the market here is crucial: for this profit-maximizing outcome to be reached, the market has to be one in which firms must make production decisions in advance and are committed to selling all their output.
This might occur when the majority of production costs are sunk, or when it is costly to hold inventories; in this environment firms will do whatever it takes to sell all of their output.
The Cournot equilibrium here yields positive profits for the firms. In the aggressive case, by contrast, the entrant will enter at a lower price than the incumbents in order to steal their customers and secure a market share for himself.
The incumbents will react by reducing the price even further, the rivalry between the firms will continue, and the result will be a perfectly competitive outcome. In this condition the competition is fierce because the products are perfect substitutes.
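For the aggressive case, here is a minimal sketch, with hypothetical numbers, of Bertrand undercutting between sellers of perfect substitutes; prices are driven down to roughly marginal cost, the perfectly competitive outcome described above.

```python
# Bertrand price war between sellers of perfect substitutes.
# Marginal cost and starting prices are hypothetical; each firm undercuts
# the rival by one step whenever doing so is still profitable.
marginal_cost, step = 20.0, 1.0
price_incumbent, price_entrant = 50.0, 45.0

while True:
    # Incumbent undercuts the entrant if it can still do so at or above cost.
    if price_entrant - step >= marginal_cost and price_entrant <= price_incumbent:
        price_incumbent = price_entrant - step
    # Entrant responds in kind.
    elif price_incumbent - step >= marginal_cost and price_incumbent <= price_entrant:
        price_entrant = price_incumbent - step
    else:
        break

# Both prices end up at (or one step above) marginal cost: the zero-profit,
# perfectly competitive outcome of fierce price competition.
print(price_incumbent, price_entrant)
```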