2. Formal Model
Next we present the formal notions and problems that will be considered in the rest of the paper. The main model is essentially the same as the one introduced in [16]. However, in [16] we dealt with an experimental environment, whereas in the current paper we take a theoretical approach, providing a completely new and original proof of the complexity of the problem.
Definition 1. Let $R = \{r_1, \dots, r_n\}$ be a set of resources and $A = \{a_1, \dots, a_m\}$ be a set of agents.
The set of possible allocations of R to A is the set of functions $\alpha : R \rightarrow A$. Given an allocation $\alpha$, we say that the allocation α assigns each resource $r_j$ to agent $\alpha(r_j)$ for all $1 \leq j \leq n$.
We will use utility functions to denote the profits given by resources to agents. Given a distribution of resources, a utility function returns a real number representing the utility that an agent assigns to the corresponding distribution of resources. That is, if u represents the utility function of an agent and we have $u(\alpha) > u(\alpha')$, then the corresponding agent is more interested in distribution $\alpha$ than in distribution $\alpha'$.
Definition 2. A utility function is a function $u : (R \rightarrow A) \rightarrow \mathbb{R}^{+}$.
We say that u depends only on agent $a_i$ if, for all $\alpha$ and $\alpha'$ fulfilling $\alpha(r_j) = a_i$ iff $\alpha'(r_j) = a_i$ for each $1 \leq j \leq n$, we have $u(\alpha) = u(\alpha')$.
If u depends only on agent $a_i$, then we also say that u is additive for agent $a_i$ if for all $\alpha$ we have $u(\alpha) = \sum_{j : \alpha(r_j) = a_i} u(\alpha_j)$, where $\alpha_j$ is the allocation assigning $r_j$ to $a_i$ and every other resource to b, with b being any arbitrary agent with $b \neq a_i$.
Note that if u is additive for $a_i$, then it is possible to represent u using a vector $P = (p_1, \dots, p_n)$, where each element $p_j$ of the vector represents the utility obtained by each individual item $r_j$. From a formal point of view, we can construct u without ambiguity from P: for all $\alpha$, $u(\alpha) = \sum_{j : \alpha(r_j) = a_i} p_j$. Hence, from now on, any additive utility function will be denoted just by its vector $(p_1, \dots, p_n)$. Given $r \in \mathbb{R}^{+}$, an additive utility function u is r-limited if $\sum_{j=1}^{n} p_j = r$.
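To make the additive representation concrete, the following sketch (the naming is ours, not the paper's) computes an agent's utility from its preference vector P and checks the r-limited condition, under the assumption that r-limited means the entries of P sum exactly to r:

```python
# Illustrative sketch of an additive utility function given by its vector P.
# Assumption (ours): "r-limited" is read as the entries of P summing to r.

def utility(P, allocation, agent):
    """Additive utility of `agent`: the sum of P[j] over the resources j
    that `allocation` (a list mapping resource index -> agent index)
    assigns to `agent`."""
    return sum(p for p, owner in zip(P, allocation) if owner == agent)

def is_r_limited(P, r):
    """Check whether the additive utility function P is r-limited."""
    return sum(P) == r

P = [3, 1, 2]                # preferences p_1, p_2, p_3 for three resources
alloc = [0, 1, 0]            # resources 1 and 3 go to agent 0, resource 2 to agent 1
print(utility(P, alloc, 0))  # 3 + 2 = 5
print(is_r_limited(P, 6))    # True
```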
If we were interested in considering non-additive utility functions, we would need to deal with the representation of preferences for bundles of resources. This can be done in several ways (for instance, extensionally, that is, providing specific outputs for each combination of resources). However, the specific representation is relevant when analyzing the complexity of the problem [17]. Hereafter, we will only consider additive utility functions.
We use $\mathcal{U}$ to represent the set of all possible additive utility functions. The utility functions of all agents in a tuple of agents A will be denoted by a tuple $U = (u_1, \dots, u_m)$, where each $u_i$ is the utility function of agent $a_i$.
Our optimization problem is defined so that we distribute resources trying to maximize the utility of the agent that receives the least utility. The entity responsible for finding and making this distribution will be called the auctioneer. If several allocations of resources provide the same utility to the agent receiving the least utility, then the one also giving more utility to the agent receiving the second least utility will be preferred, and so on. In order to formalize these preferences, a single number encapsulating the utilities of all agents for each possible allocation of resources will be defined as follows: we add the utility of each agent for the allocation multiplied by a factor that is higher for agents receiving less utility. If these factors are carefully chosen, then the goal of maximizing the utility of the agent with the least utility (and, in case of ties, the utility of the agent with the second lowest utility, and so on) will be equivalent to maximizing that number. This number, defined later in Definition 4, will be the result of combining the utility values of the agents as defined by the function introduced next in Definition 3.
Actually, we will use these notions to establish a total order among allocations so that, given two different allocations, one will always be preferred over the other. If two different allocations provide the same utility to the agent receiving the least utility, and also to the agent receiving the second least utility, and so on up to all agents, then the indexes of the agents to which the resources are given will be used to break the tie in some arbitrary way. Note that the suitability of some fake preferences to achieve beneficial allocations of resources under the egalitarian social welfare will obviously depend on the allocation found by the auctioneer when these lies are used. By unambiguously defining a (unique) optimum for these allocations of resources, we will be able to unambiguously evaluate the utility of each possible combination of fake preferences. Of course, the deviations from these optimal allocations to be expected in practice (actually, just finding these allocations will be NP-hard, as we will show later) could affect the suitability of the fake preferences being used to achieve beneficial allocations, in line with the sensitivity to variations observed in [16]. In particular, some fake preferences that are beneficial under the actual optimal allocation might not be so if the auctioneer finds and carries out some other sub-optimal allocation.
Definition 3. The numeric order of non-negative numbers $x_1, \dots, x_k$ for base M is defined as $\mathit{ord}_M(x_1, \dots, x_k) = \sum_{i=1}^{k} x_i \cdot M^{k-i}$.
Suppose $x_1, \dots, x_k$ are always expected to belong to some closed numeric intervals $[0, c_1], \dots, [0, c_k]$, respectively. Note that, if M is large enough with respect to these bounds, then the numeric order of these parameters for base M will give priority to $x_1$, next to $x_2$ if there is a tie with $x_1$, and so on. For instance, let us suppose $x_1, x_2, x_3$ are real numbers in the interval $[0, 10]$. If M is large enough and $\mathit{ord}_M(x_1, x_2, x_3) > \mathit{ord}_M(x_1', x_2', x_3')$, then $(x_1, x_2, x_3)$ is preferred over $(x_1', x_2', x_3')$ according to those priorities; that is, we give priority to the first parameter, next to the second one if there is a tie with the first, and finally to the third one if there is a tie with the other two.
Definition 4. Let A, R, and U be as before. For all allocations α, we define the value of α as the numeric order, for a sufficiently large base M, of the utilities of the agents taken from lowest to highest, where ties between agents with equal utility are solved in any arbitrary way (e.g., by considering the agent with the lower index first). The egalitarian social welfare optimization problem consists in, given A, R, and U, finding the allocation α maximizing this value. The (unique) solution of the problem for A, R, U will be referred to as the optimal allocation for A, R, U.
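For tiny instances, the optimization in Definition 4 can be prototyped by brute force: enumerate every allocation, sort each candidate's agent utilities from lowest to highest, and keep the lexicographically greatest sorted profile. This realizes the leximin preference that the base-M combination encodes (the index-based tie-breaking is omitted for brevity; the names are ours):

```python
from itertools import product

def egalitarian(utilities):
    """utilities[i][j]: additive utility of agent i for resource j.
    Returns (allocation, profile), where the allocation is a tuple mapping
    each resource to an agent and the profile is the agents' utilities
    sorted from lowest to highest; the profile is maximized
    lexicographically, i.e., leximin."""
    m, n = len(utilities), len(utilities[0])
    best_alloc, best_profile = None, None
    for alloc in product(range(m), repeat=n):
        profile = sorted(
            sum(utilities[i][j] for j in range(n) if alloc[j] == i)
            for i in range(m)
        )
        if best_profile is None or profile > best_profile:
            best_alloc, best_profile = alloc, profile
    return best_alloc, best_profile

# Agent 0 values the resources as 3, 1, 2; agent 1 values each of them as 2.
alloc, profile = egalitarian([[3, 1, 2], [2, 2, 2]])
print(alloc, profile)  # (0, 1, 1) [3, 4]: resource 0 to agent 0, the rest to agent 1
```

Sorted Python lists compare lexicographically, which is exactly the comparison the base-M weighted sum performs when M dominates all utility bounds.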
Note that the definition of the previous problem is not affected by whether partially locked valuations (recall the introduction) are considered or not, because the problem only concerns finding some distribution depending on some given utility functions, regardless of whether these utility functions really reflect the true valuations of the corresponding agents or not.
Typically, only the utility of the agent receiving the least utility is considered in the literature in the definition of the previous problem, so no preference is defined between allocations giving the same utility to the agent achieving the least utility (note that this is equivalent to considering only the lowest utility in the numeric order). The NP-hardness of this problem is proved in [18]. This problem variant can be trivially polynomially reduced to the variant introduced in Definition 4, which shows the NP-hardness of the latter. Next, we show that this NP-hardness also applies to the problem under the additional assumption that utility functions must be r-limited (note that, in this case, the term M in the previous definition will depend on r).
Proposition 1. Let us consider the variant of the egalitarian social welfare optimization problem given in Definition 4, where an additional input r is given and all considered utility functions must be r-limited. The resulting problem is also NP-hard.
Proof. The problem in Definition 4 has already been proved to be NP-hard. In particular, in [18] the authors prove it by reducing the well-known problem PARTITION into it: they construct an instance having exactly two agents with the same preferences. Thus, it also shows the NP-hardness of the egalitarian problem with the additional restriction of using instances with two agents whose utility functions are the same (let G denote this particular problem). Hence, we also infer the NP-hardness of the more general problem where the valuations of each agent are r-limited. This is because the NP-hard problem G can be trivially polynomially reduced to that r-limited problem: we can just set r = v, where v is the sum of all preferences of any of the agents. Problem G can also be polynomially reduced to a problem taking M as in Definition 4 and assuming r-limited preferences as follows: we just define the value to be reached as the least one guaranteeing that the agent with the least utility gets at least the required utility (note that, with two identical agents, achieving this value will imply the other agent also reaches it). □
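The reduction from PARTITION used above can be stated programmatically: a multiset of numbers becomes an instance with two agents sharing identical additive preferences, and a perfect partition exists exactly when the egalitarian optimum gives the worst-off agent half of the total. A brute-force sketch (our own naming, for illustration only):

```python
from itertools import product

def partition_to_egalitarian(numbers):
    """PARTITION instance -> egalitarian instance: two agents with
    identical additive preferences (one resource per number)."""
    return [list(numbers), list(numbers)]

def max_min_utility(utilities):
    """Brute-force egalitarian optimum: the best achievable utility
    of the agent receiving the least utility."""
    m, n = len(utilities), len(utilities[0])
    return max(
        min(sum(utilities[i][j] for j in range(n) if alloc[j] == i)
            for i in range(m))
        for alloc in product(range(m), repeat=n)
    )

nums = [3, 1, 1, 2, 2, 1]   # total 10; e.g., {3, 2} vs {1, 1, 2, 1}
inst = partition_to_egalitarian(nums)
print(max_min_utility(inst) == sum(nums) // 2)  # True: a perfect partition exists
```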
Now we present the problem where an agent has to find its optimal fake utility function, that is, the (probably false) utility function allowing it to obtain the maximum utility when resources are distributed using egalitarian social welfare. Thus, we have to find the utility function satisfying that, if agent $a_i$ communicates it, and the rest of the agents communicate the utility functions that agent $a_i$ estimated for them, then the utility that agent $a_i$ obtains (using its true utility function) after applying the egalitarian social welfare rules is maximized. In Definition 5, the term to be maximized will represent the utility that agent $a_i$ obtains when it communicates to the auctioneer that its utility function is f. As can be expected, in order to compute this term, we need to solve the optimization problem described in Definition 4. Thus, this maximization requires optimizing a term whose computation in turn requires another optimization.
Definition 5. Given A, R, U as before and an agent index i, for all additive utility functions f let $U_f$ denote the tuple U with its i-th component replaced by f. The optimal fake utility problem consists in, given A, R, U, and i, finding f maximizing the real utility obtained by agent $a_i$ in the optimal allocation for A, R, $U_f$.
Even though it could seem that this problem requires the optimization of a term whose computation requires performing another optimization, it is easy to check that finding the optimal solution (in particular, without r-limited utility functions or partially locked valuations) is not difficult at all.
Proposition 2. Let us suppose some allocation of resources provides a non-null utility to all agents. Let the utility functions of each agent estimated by agent $a_i$ (and the utility of agent $a_i$ itself) be $P_1, \dots, P_m$. The optimal fake utility function for $a_i$ is $c \cdot P_i$, where c is any positive value such that $c \cdot \sum_{j=1}^{n} P_i[j] < P_l[j]$ for all j and l with $l \neq i$ and $P_l[j] > 0$.
Proof. We have to prove that the proposed utility function is actually the optimal one for $a_i$ (let us remark that c always exists). These fake preferences of $a_i$ satisfy that, even when all resources are assigned to $a_i$, agent $a_i$ is still the agent whose utility turns out to be lower than the utility of any other agent that receives any resource this agent has a non-null preference for. Let $U'$ denote the vector of utility functions defined by preference vector $c \cdot P_i$ for agent $a_i$ and preference vectors $P_l$ for the other agents. Note that maximizing the egalitarian social welfare for $U'$ in turn maximizes the fake utility of agent $a_i$ (i.e., preferences $c \cdot P_i$) in its role of the least satisfied agent, provided that some non-null utility is given to the other agents (let us remark that, if this restriction were impossible to satisfy, then there would be a contradiction with our initial assumption that some allocation of resources provides non-null utility to all agents). Moreover, the aim of $a_i$ is maximizing its true utility provided that the same constraint holds (as allocations not giving a non-null utility to all agents will never be picked by the auctioneer). Note that maximizing the utility achieved with preferences $c \cdot P_i$ subject to that constraint is equivalent to maximizing the utility achieved with preferences $P_i$ subject to the same constraint. Thus, the optimal strategy for $a_i$ consists in sending preferences $c \cdot P_i$ to the auctioneer. □
It is worth noting that this optimal lie does not depend at all on the utilities the liar agent estimated the other agents will send to the auctioneer, because this expression is constant with respect to those utilities. Thus, in this case, no estimation is required at all.
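The construction in Proposition 2 is easy to implement. In this sketch (the names and the exact choice of c are ours), the liar's optimal report is its true preference vector scaled by a positive c small enough that its total fake utility, even when it receives every resource, stays below every non-null preference of every other agent; as noted above, the particular estimates only matter through a lower bound on their positive entries:

```python
def optimal_fake_preferences(true_prefs, estimated_prefs):
    """Proposition 2 (sketch): return c * P_i for the liar's true vector
    P_i, with c > 0 chosen so that sum(c * P_i) is below every non-null
    preference in the other agents' estimated vectors."""
    positives = [p for prefs in estimated_prefs for p in prefs if p > 0]
    c = min(positives) / (2 * (sum(true_prefs) or 1))  # any such small c works
    return [c * p for p in true_prefs]

# Liar's true preferences and its estimates of the other two agents:
fake = optimal_fake_preferences([4, 0, 6], [[1, 2, 3], [5, 1, 0]])
print(fake)
print(sum(fake) < 1)  # True: total fake utility is below every positive estimate
```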
3. Proving the Hardness of Lying under Partially Locked Valuations
The result given before in Proposition 2 shows that taking advantage of lying in an egalitarian social welfare allocation is extremely easy if no additional conditions are imposed, so in this case, this allocation scheme has no practical usability. Fortunately, partially locked valuations naturally apply in real situations: agents will not have total freedom to lie about their preferences over all available resources without triggering obvious distrust. Next we will explicitly consider that the liar could be unable to lie about its preferences over specific resources. In the introduction, we considered that, for each resource, an interval could denote the set of valuations (including the true one) the liar agent could send to the auctioneer without raising trivial suspicion. However, our capability to denote which valuations are acceptable does not need to be that rich in order to make the resulting problem $\Sigma_2^p$-hard. Since hardness propagates via generalization, hardness results apply to more problem variants (and thus are more interesting) when proved for the most particular problem variants. Thus, here we consider the least general version of the problem for which we can prove $\Sigma_2^p$-hardness: the particular case where the liar can provide any valuation for some resources, but only the true valuation for the others. This is indeed a particularization of a problem version based on intervals, because both situations can be trivially expressed by using intervals $[0, \infty)$ and $[v_j, v_j]$, respectively, where $v_j$ is the true valuation of resource $r_j$.
Formally, let us consider the problem given in Definition 5 under the additional constraint that the liar cannot lie about some specific resources. For this new problem, optimal solutions cannot be trivially constructed as before. Actually, in this case, solving the problem requires a hard optimization of an expression whose evaluation in turn requires solving another hard optimization problem, and this does make the resulting problem much harder. We show that the resulting (decision) problem is $\Sigma_2^p$-complete.
Definition 6. Let A, R, U be as before and let i be an agent index. Let $T \subseteq \{1, \dots, n\}$ be a set of indexes denoting the resources for which agent i cannot lie (i.e., it cannot send false preferences to the auctioneer for them). Given A, R, U, T, i, and a target utility Q for agent $a_i$, the fake utility problem under partially locked preferences, denoted by FPL, consists in finding out if there exists a fake utility function f, whose preference for each resource $r_j$ with $j \in T$ equals the true preference of $a_i$ for $r_j$, such that agent $a_i$ obtains at least Q real utility in the resulting egalitarian allocation.
Theorem 1. The fake utility problem under partially locked preferences is $\Sigma_2^p$-complete.
Proof. First we prove that the problem is in $\Sigma_2^p$. Note that our problem can be equivalently stated as finding out whether there exists a vector of fake preferences, an allocation of resources, and a value such that, for all possible allocations of resources, four conditions (a)–(d) relating the egalitarian values and the target utility hold (note that only condition (a) depends on the universally quantified allocation). Hence, we can define the problem as the search for something of polynomial size such that, for all things of polynomial size, some property checkable in polynomial time holds. Thus, the problem belongs to $\Sigma_2^p$.
In order to prove the $\Sigma_2^p$-hardness, we will construct a polynomial reduction from a $\Sigma_2^p$-hard problem, QSAT (also known as $\exists\forall$-SAT), into FPL. This problem consists in checking whether the expression $\exists x_1 \dots x_n\, \forall y_1 \dots y_n\, \varphi$ holds, where $\varphi$ is a propositional logic formula denoted in Disjunctive Normal Form (DNF) depending only on propositional variables $x_1, \dots, x_n, y_1, \dots, y_n$.
Let $\bar{x}$ and $\bar{y}$ abbreviate $x_1 \dots x_n$ and $y_1 \dots y_n$, respectively. Then we have $\exists \bar{x}\, \forall \bar{y}\, \varphi \equiv \exists \bar{x}\, \neg \exists \bar{y}\, \psi$, where $\psi = \neg\varphi$ is given in Conjunctive Normal Form (CNF). Hereafter, we will only consider that latter expression $\exists \bar{x}\, \neg \exists \bar{y}\, \psi$, where we assume $\psi = C_1 \wedge \dots \wedge C_k$ and $C_j = l_{j,1} \vee l_{j,2} \vee l_{j,3}$ for all $1 \leq j \leq k$, and each literal is $x_h$, $\neg x_h$, $y_h$, or $\neg y_h$ for some h.
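For intuition, the ∃∀ question can be decided by brute force on tiny formulas. In this sketch (the encoding is ours), ψ is given in CNF over x- and y-literals, and the function checks whether some assignment of the x-variables makes ψ unsatisfiable for every assignment of the y-variables:

```python
from itertools import product

def qsat(n_x, n_y, cnf):
    """Decide the exists-x forall-y not-psi question by brute force.
    A literal is ('x', h, True) for x_h, ('x', h, False) for not x_h,
    and analogously ('y', h, ...); `cnf` is a list of clauses (lists)."""
    def satisfied(clause, xs, ys):
        return any((xs if kind == 'x' else ys)[h] == positive
                   for kind, h, positive in clause)
    return any(
        not any(all(satisfied(c, xs, ys) for c in cnf)
                for ys in product([False, True], repeat=n_y))
        for xs in product([False, True], repeat=n_x)
    )

# psi = (x_1 or y_1) and (not y_1): setting x_1 to false makes psi
# unsatisfiable for every y_1, so the answer is yes.
print(qsat(1, 1, [[('x', 0, True), ('y', 0, True)], [('y', 0, False)]]))  # True
```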
Given this instance of QSAT, we will create an instance of FPL from it such that there exists some , making it impossible to satisfy for all if, in the FPL instance, a specific agent reaches a specific target utility in the auctioneer’s allocation (i.e., in the egalitarian social welfare allocation) after sending some fake utilities to the auctioneer. That agent, called , will be able to reach that utility if the auctioneer does not manage to find some allocation giving at least some target utility to the agents receiving less utility. By setting its fake preferences in some way, will force the auctioneer to set variables (actually, the resources representing them) in some way, and then the auctioneer will try to set variables so as to satisfy . If it does, then will not reach its target utility with its real preferences, and if it does not, then will reach it.
The agents, resources, and preferences (i.e., valuations) of the agents for the resources in the constructed FPL instance are schematically depicted in Figure 1. Circles denote agents, rectangles denote resources, and arrows show the non-null preferences of agents for resources.
Formally, in addition to agent , we also consider the following agents:
, , and .
For all , we have agents and .
For all , we have agents and .
For all , we have agent .
For all , we have agent .
Besides, the set of resources R consists of the following resources:
, , and .
For all agents but agents , we have a resource with the same name as the agent.
For all , we have resources , , and . Besides, for all , we have resources and .
For all we have a resource , and for all , resources and .
For all , we have resources and .
Let . For the sake of notation simplicity, let us denote the preference of agent a for resource r by . The (non-null) preferences of agents for resources are the following:
For all , , , and . Besides, for all , .
For all , , and . Besides, for all , .
For all with (note that each is , , , or for some h) we have for each . Besides, .
, , and, for all , .
, and, for all , .
, , and, for all , .
and .
For all , . Besides, for all we have , and in addition, for all we have , and for all we have .
All agents receive 0 utility for any other resource.
The set T of resources the liar agent cannot show fake preferences for is the following: . Thus, the agent can underrate or overrate its preferences for the remaining resources as much as it wants.
Finally, Q, the required real utility to be reached by the liar agent from the allocation of resources formed by the auctioneer, is set to . This completes the instance of FPL constructed from the original QSAT instance. Let us show that the reply to one instance is yes if and only if the reply to the other instance is yes.
The FPL instance will simulate the QSAT instance as follows. Each assignment of resources to agents in FPL will represent a valuation of propositional variables : we will consider that the propositional variable is set to ⊤ when agent gets resource , whereas a valuation where the propositional variable is set to ⊥ will be represented when resource is assigned to agent (and the same for agents and and resource ).
Let us show that, regardless of the fake preferences set by the liar agent for resources , an allocation giving all agents at least M utility (pretend utility in the case of the liar agent, that is, utility according to the fake preferences sent to the auctioneer) can always be reached, so only these allocations must be considered.
Note that each agent a is the only one interested in the resource with the same name a, so any optimal allocation of resources will give each resource a to agent a. Given this, each agent (respectively, ) has two possibilities to reach M utility. One of them consists in getting resource . The other one requires getting resource (resp. ) and all k resources (resp. ). The first case forces its “twin” agent (resp. ) to get (resp. ) and all k resources (resp. ) to reach M itself, whereas the latter case forces its twin agent to get resource . A similar argument applies to each pair of agents and , although no or resources exist in this case. We conclude that, in any allocation of resources where all agents achieve at least M utility, if the allocation represents setting the propositional variable to ⊤ (resp. to ⊥), then agents , will consume (receive) all resources of the forms , , , , , , , except the k resources of the form (resp. the k resources of the form ). A similar argument applies to the assignment of the propositional variable to ⊤ or ⊥: the corresponding agents will consume all the corresponding resources (in this case, without resources or ) except the k resources of the form or the k resources of the form , respectively.
These k resources left available for each variable or will be used by the auctioneer to try to satisfy each disjunctive clause in . In the FPL instance, this will translate into giving utility to the agent representing clause when it receives some resource representing one of the literals of the clause. Each agent representing a clause will achieve utility by receiving any of the resources , , , representing the specific variable valuation required by any of its literals , , or . For instance, if , then agent receives utility by receiving any of these resources: , , . Thus, giving utility to some agent this way means satisfying the logical clause , and giving to all of them this way implies satisfying . As we will see, achieving this will give the auctioneer access to an allocation increasing the utilities of the agents receiving less utility (thus, preferable), but will also give less real utility, preventing it from getting its Q target utility.
Each agent will be able to reach at least M utility in a possible second way: by getting both resources and , agent will also receive a utility ( utility for each of them). This alternative case will mean clause was not satisfied (no resource , , or was given to agent ). Since satisfying will be preferable to the auctioneer, the auctioneer will try to prevent this case.
Note that, regardless of whether all clauses are satisfied or not, making all agents , , , , and achieve at least M utility does not require giving these agents more than resources of the kinds , , , , : each pair of agents and or and will use k of them (there are pairs), and the remaining k resources will be required by the k agents . Note that we are not counting in that expression the number of necessary resources of kinds , , , , , , , , . If all clauses are satisfied, then no resource of the form will be necessary to make all of these agents reach at least M utility, and all necessary resources of the mentioned kinds will be of the forms , , , . On the contrary, in the completely opposite case where no clause is satisfied, resources of the forms , , , will be necessary, as well as all k resources of the form .
In fact, all the remaining resources of these forms , , , , (that is, the resources of these kinds we do not need to assign to agents , , , , and to let all of them reach M utility) will in turn be needed by agents . All of these resources will be required to let these agents reach at least M utility (in fact, exactly M): each agent needs k resources to reach M utility, and there are of them.
Agent can get at least M utility in two possible ways. On the one hand, if all agents get utility by receiving one resource , , , satisfying clause , then no agent will need any additional resource or to achieve at least M utility. Resources of kind will be needed by agents as mentioned before, but all resources will be free to be assigned to agent , and this agent will achieve exactly M utility just by receiving all of them. Let (a) be this case.
On the other hand, if some agent does not get any resource , , , , then it will need to take both resources and to achieve at least M utility, particularly the utility, as in the other case (this time, this means clause is not being satisfied). Thus, not all resources will be available to agent , and agent will not be able to achieve M utility just by taking all resources . Let (b) be this case.
By the arguments given in previous paragraphs, all allocations of resources formed by the auctioneer will be such that agents , , , , get exactly M utility and all agents get exactly utility. As we will see, the auctioneer will always manage to give at least M utility to the remaining four agents, , , , and . Thus, the resource distribution chosen by the auctioneer will totally depend on the numeric utilities reached by these four agents.
The case (a) mentioned before will happen only if it is possible to satisfy all clauses, that is, if it is possible to satisfy . As mentioned earlier, this case will enable the auctioneer to achieve a better distribution of resources for the agents according to the egalitarian social welfare. In particular, all agents , , , and will achieve at least utility. On the contrary, in case (b), which happens when it is impossible to satisfy all clauses, at least one of these four agents will reach less than utility, making this case less desirable to the auctioneer. However, this case will be more profitable to the liar agent according to its real preferences, as it will reach its target utility only in this case.
Let us suppose that the answer to the QSAT instance is yes. Then, there exists some making false for any . Let us see that the answer to the constructed instance of FPL is yes. Let the fake preferences of be the ones representing the valuation of making false for any . That is, if is ⊤ (respectively, ⊥) in this valuation, then pretends to have preferences (resp. 0) and (resp. 1), instead of its real preferences (recall the real preferences are 1 in both cases). Let us study the allocations of resources the auctioneer could form in this case, and let us show that the actual allocation of resources chosen by the auctioneer gives real utility to . Recall that, in all cases, resources , , , and are assigned to the agents with the same names, giving them , k, n, and M utility, respectively. We have the following cases:
- (1)
The auctioneer gives to the resources of types , agent receives 1 utility from. Thus, receives n utility from them, adding up to utility after also counting resource . In order to give enough utility to agent , no resource of types , will be available for this agent, so resource must be assigned to agent , making it reach utility after resource is also counted. Even after counting resource , agent cannot reach M utility just by taking all resources not assigned to agents , because at most of them are available (recall that cannot be satisfied in the valuation of preferred by ).
If resource is given to agent , then agent reaches utility after counting resource . In this case, agent reaches at most utility if resource is given to agent , and at most M utility when it is given to agent . In the former case, the utilities of , , , are at most , , , , respectively, whereas in the latter case, these utilities are at most , , M, and .
Alternatively, if resource is given to agent , then agents , , , achieve utilities , , , M for some if resource is given to agent (recall that agent could also receive some resources ), and utilities , , , M if that resource is given to agent .
Out of these four possible allocations in this case, the third one (with utilities , , , M) is the best one according to the egalitarian social welfare: the first allocation does not give at least M utility to all agents, and only the third one gives M, , and utilities to the three agents with less utility. Hence, the auctioneer chooses it, and this way, reaches utility (both pretend and real).
- (2)
The auctioneer does not allocate the resources to form the specific valuation of variables demanded by the fake preferences of , although receives at least one of these resources. Thus, does not receive all the resources and it gives 1 utility to; it receives at most of them, achieving from them at most utility. Let us see that, in any allocation of resources reached by agents , , , in this case, the multiset of their utilities will always be worse than the multiset reached in the allocation chosen in case (1), as we saw before. Thus, the allocation picked in case (1) will be preferred by the auctioneer over any allocation in this case.
Let us suppose that, by giving to up to of the resources of types and it wants and at least one of them, the auctioneer can fulfill (recall that this is impossible if all n of them are given to ). Alternatively, if the auctioneer could not fulfill , then the possibilities listed next would just be reduced. If the only resources of types and agent receives are them, and the remaining ones (at most of them) are given to (which does not have any preference for them, as all of them give a 1 utility), then resource must be given to as well, because it needs it to reach at least M utility. By counting these resources as well as resources and , agents and reach at most M utility and at least utility, respectively.
If agent gets all k resources and agent receives resource , then, after counting resources and , these agents achieve M and utility, respectively. If the remaining resource (i.e., ) is given to , then the multiset of utilities of these four agents , , , is, at best, for some , and if resource is given to agent , then this multiset is, at best, . These multisets of utilities for these four agents are less desirable than the multiset seen in case (1).
Alternatively, if agent receives resource , then the multiset of utilities of these four agents will be no more attractive than or (for some and ) depending on whether is given to agent or to agent , respectively. Both multisets of utilities are again worse than .
- (3)
Finally, we consider the case where receives none of the resources and . In this case, clearly needs resource to reach at least M utility, and some combination of n resources of types and will be given to . Let us suppose it is a combination representing some letting hold for some (if it is not, then just some cases listed next will be impossible). By receiving the mentioned resources as well as resources and , agents and get and M utility, respectively. On the one hand, if resource is given to agent then the multiset of utilities of agents , , , will be or , depending on whether is given to or to . On the other hand, if resource is given to agent , then the corresponding multisets in these two cases will be no more attractive than or , respectively. All of these multisets are again less attractive than .
Let us suppose that the answer to the QSAT instance is no. Then, there does not exist making false for any . Let us see that the answer to the constructed instance of FPL is no. That is, the allocation of resources formed by the auctioneer will not give real utility to in any case. We consider the following cases:
- (i)
Under the fake preferences set by agent for resources of types and , the auctioneer can give some or all of these resources to in such a way that gets at least fake utility from them. Then, assigning these resources to gives it fake utility (after also counting the effect of resource ), where is the excess of utility over given by these resources. Since less than n resources and will remain to be given to , making reach at least M utility implies giving to . This way, reaches utility, where e is the number of resources and given to . Since we are assuming that some can satisfy no matter what is chosen, the auctioneer can manage to leave all k resources unassigned to agents (i.e., it can satisfy all clauses ), meaning that agent can receive all of these k resources and reach M utility with them.
On the one hand, if resource is given to agent , then it obtains utility. Thus, the utilities of , , , are given by or , depending on whether is assigned to or to , respectively. On the other hand, if is given to agent , then the previous two cases turn into allocations with utility multisets and , respectively.
Note that, in addition to the previous four possible allocations, four other possible allocations arise by not giving any or resource to agent . In this case, can achieve M utility by receiving n resources of form and , which in turn implies giving to agent to let it also reach at least M utility. The translation of the previous four cases into this local rearrangement between agents and gives rise to the following utility multisets, respectively:
,
,
, and .
If , then, out of these eight possible allocations, the second one, giving , is the only one reaching strictly more than M utility for all four agents, so it is chosen by the auctioneer. Since cannot receive more than n resources or in any allocation, and neither nor are given to in this allocation, does not reach real utility in this case, as required.
If , then, out of these eight possibilities, again the second one, with utility multiset , is the one preferred by the auctioneer (note that it is the only one where the three lowest utilities are M, , and at least ), so it is again chosen. For the same reasons as before, again does not reach real utility.
- (ii)
Under the fake preferences set by agent for resources of types and , the auctioneer can give some or all of these resources to in such a way that gets at least and less than fake utility from them. By reasoning similar to that of the previous case, this time the first four allocations considered there yield the following utility multisets:
,
,
, and , where . In addition, the last four allocations considered in case (i) yield exactly the same utility multisets as in that case. Note that the second and fourth allocations do not provide at least M utility to all four agents. The remaining allocations in our set of eight possible allocations give at least M utility to the worst-off agent, although only the sixth one provides utility to the agent with the second-lowest utility (note that ). Thus, the auctioneer picks it. Since does not obtain resource in this allocation, does not achieve real utility in this case.
- (iii)
Under the fake preferences set by agent for resources of types and , the auctioneer cannot give some or all of these resources to in such a way that gets at least fake utility from them. Then, the first four allocations considered in the previous cases (where is given some combination of and resources) cannot be better for the auctioneer than in those cases. Thus, the allocation preferred by the auctioneer is again the sixth one, and again, does not get real utility.
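The selection rule applied throughout these cases — the auctioneer prefers the allocation whose sorted utility multiset is lexicographically largest, i.e., egalitarian social welfare with leximin tie-breaking, as the comparisons of the eight multisets above suggest — can be sketched as follows. This is an illustrative sketch, not the paper's formal definition; the function names are ours.

```python
def leximin_key(utilities):
    """Sort utilities in ascending order: the egalitarian comparison looks
    first at the worst-off agent, then at the second worst-off, and so on."""
    return sorted(utilities)

def auctioneer_choice(candidate_multisets):
    """Pick the leximin-maximal utility multiset among the candidates,
    mirroring how the proof compares the eight possible allocations."""
    return max(candidate_multisets, key=leximin_key)
```

For instance, between the multisets [2, 2, 9] and [2, 3, 3], the rule prefers [2, 3, 3]: the worst-off utilities tie at 2, and the second-lowest utility is larger in the latter.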
We conclude that the answer to the QSAT instance is yes if the answer to the corresponding FPL instance is yes. It is easy to see that the FPL instance can be built from the original QSAT instance in polynomial time with respect to the size of the QSAT instance, so we have a polynomial reduction from QSAT to FPL, and the -hardness of FPL holds. □
A simple consequence of the previous result is the -completeness of another problem variant where, for each resource, an interval denotes the set of valuations the liar agent can give to it. As we saw at the beginning of this section, the -hardness of this variant follows immediately, as it generalizes the problem in Definition 6. On the other hand, in order to prove the inclusion of this variant in class , it suffices to replace condition (c) in the first paragraph of the previous proof with a condition requiring that, for all j, is within the interval specified for the j-th resource.
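The modified certificate condition amounts to a simple componentwise membership check: each declared valuation must lie within the interval given for the corresponding resource. A minimal sketch (function and parameter names are ours, for illustration only):

```python
def within_intervals(valuation, intervals):
    """Check that, for all j, the j-th declared value lies in the closed
    interval specified for the j-th resource (the modified condition (c))."""
    return len(valuation) == len(intervals) and all(
        lo <= v <= hi for v, (lo, hi) in zip(valuation, intervals)
    )
```

Since this check runs in time linear in the number of resources, it does not affect the membership argument of the previous proof.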
Another possible variant consists in accepting only r-limited utility functions. The -completeness is preserved in this case as well.
Theorem 2. Let us consider that, in the problem given in Definition 6, utility functions must be r-limited (i.e., for some given and for all agents, the sum of the preferences for all resources must be r). The resulting problem is -complete.
Proof. The proof of the inclusion in class is essentially the first paragraph in the proof of Theorem 1 (we just have to add the condition ). Moreover, we can prove the -hardness by using almost the same proof as in Theorem 1; only a few changes must be introduced to include the r-limitation.
In the FPL instance constructed there, we set r to any value higher than the sum of the preferences shown by any agent for all resources. An additional agent z and an additional resource z are added so that and, for every other agent a, we set , where u is the sum of the preferences shown by agent a for all other resources. Agent z only has a non-null preference for resource z. Thus, this resource will be given to agent z in any allocation conforming to the egalitarian social welfare, so agent z and resource z will be transparent to the rest of the setting.
In addition, the set T of resources for which agent cannot show fake preferences is redefined as follows: . Since resource z will always be given to agent z, agent can underrate or overrate its preferences for resources in as much as it wants, compensating by overrating or underrating its preference for z, respectively, to keep the sum of its preferences for all resources equal to r. □
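The padding trick used in this proof — giving each agent a preference of r minus its current sum for the extra resource z — can be illustrated as follows. This is a sketch under the assumption that r is at least every agent's original sum; the function name is ours.

```python
def pad_to_r(preference_vectors, r):
    """Append, for each agent, a preference for the extra resource z equal
    to r minus the agent's current sum, making every vector r-limited."""
    assert all(sum(p) <= r for p in preference_vectors), "r must dominate sums"
    return [p + [r - sum(p)] for p in preference_vectors]
```

After padding, every vector sums to exactly r, so the r-limitation holds while the relative preferences over the original resources are unchanged.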