The answer would seem to be no. After all, if information is bad for you, you could always ignore it, continue living your life naively, and do just as well. Further, it is easy to write down games where a player’s payoff increases with the amount of information he has, and there are plenty of applications positively connecting information to welfare, like the Condorcet jury theorem.
In reality, the answer is yes. Unfortunately, you can’t always credibly commit to ignoring that information. This can lead to other players not trusting you later on in an interaction, which ultimately leads to a lower payoff for you.
Here’s an example. We begin by flipping a coin and covering it so that neither player observes which side is facing up. Player 1 then chooses whether to quit the game or continue. Quitting ends the game and gives 0 to both players. If he continues, player 2 chooses whether to call heads, tails, or pass. If she passes, both earn 1. If she calls heads or tails, player 2 earns 3 for making the correct call and -3 for making the incorrect call, while player 1 receives -1 regardless.
Because player 2 doesn’t observe the flip, her expected payoff for calling heads or tails is 0. As such, we can write the game tree as follows:
Backward induction easily gives the solution: player 2 chooses pass, so player 1 chooses continue. Both earn 1.
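The backward induction can be sketched in a few lines of code. This is a minimal illustration, not anything from the original post; the action names and the payoff tuples (player 1, player 2) simply encode the game described above, with player 2’s calls reduced to their expectation of 0 since she doesn’t observe the flip.

```python
# Backward induction for the game where player 2 does NOT observe the flip.
# Each entry maps player 2's action to the payoff pair (player 1, player 2).
# Calling heads or tails wins 3 or loses 3 with equal probability, so her
# expected payoff from either call is 0.5*3 + 0.5*(-3) = 0, while player 1
# gets -1 regardless of which call she makes.
p2_options = {"pass": (1, 1), "heads": (-1, 0.0), "tails": (-1, 0.0)}

# Player 2 picks the action that maximizes her own (expected) payoff.
p2_choice = max(p2_options, key=lambda a: p2_options[a][1])
continue_payoffs = p2_options[p2_choice]

# Player 1, anticipating player 2's choice, compares quitting against continuing.
p1_options = {"quit": (0, 0), "continue": continue_payoffs}
p1_choice = max(p1_options, key=lambda a: p1_options[a][0])

print(p2_choice, p1_choice, p1_options[p1_choice])  # pass continue (1, 1)
```

As the printout confirms, pass dominates in expectation, so player 1 continues and both earn 1.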
If information can only help, then allowing player 2 access to the result of the coin flip before she moves shouldn’t decrease her payoff. But look what happens when the coin flip is heads:
Now the solution is for player 2 to choose heads and player 1 to quit. Both earn 0!
The case where the coin landed on tails is analogous. Player 2 now chooses tails and player 1 still quits. Both earn 0, meaning player 1 is worse off knowing the result of the coin flip.
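The same backward-induction sketch shows how the solution flips once player 2 observes the coin. Again, this is just an illustrative encoding of the game above (the function name and action labels are mine): conditioning player 2’s payoffs on the observed flip replaces the expected value of 0 with a sure 3 for the correct call.

```python
# Same game, but player 2 now observes the flip before moving.
def solve(flip):
    # Player 2's payoffs conditional on the observed flip: the correct
    # call now pays her 3 for certain, the incorrect call -3.
    p2_options = {
        "pass": (1, 1),
        "heads": (-1, 3 if flip == "heads" else -3),
        "tails": (-1, 3 if flip == "tails" else -3),
    }
    p2_choice = max(p2_options, key=lambda a: p2_options[a][1])
    # Player 1 anticipates her choice when deciding whether to continue.
    p1_options = {"quit": (0, 0), "continue": p2_options[p2_choice]}
    p1_choice = max(p1_options, key=lambda a: p1_options[a][0])
    return p1_choice, p2_choice, p1_options[p1_choice]

print(solve("heads"))  # ('quit', 'heads', (0, 0))
print(solve("tails"))  # ('quit', 'tails', (0, 0))
```

Either way the coin lands, player 2 would call it correctly, so player 1 quits and both earn 0 — exactly the commitment problem described next.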
What’s going on here? The issue is credible commitment. When player 2 does not know the result of the coin flip, she can credibly commit to passing; although heads or tails could provide a greater payoff, the pass option generates the higher utility in expectation. This credible commitment assuages player 1’s concern that player 2 will screw him over, so he continues even though he could guarantee himself a break-even outcome by quitting.
On the other hand, when player 2 knows the result of the coin flip, she cannot credibly commit to passing. Instead, she can’t help but pick the option (heads or tails) that gives her a payoff of 3. But this creates a commitment problem: player 1 quits rather than let player 2 pick an outcome that gives him a payoff of -1. Both end up worse off because of it.
Weird counterexamples like this prevent us from making sweeping claims about whether more information is inherently a good thing. I noted at the beginning that it is easy to write down games where payoffs increase for a player as his information increases. Most game theorists would probably agree that more information is usually better. But it does not appear that we can prove general claims about the relationship.