In the past week or so, I have been trying to come up with a way to make a particular linear programming model work, but without much luck. My model has two decision variables, X1 and X2, and I need to make the difference between them as large as possible. Both X1 and X2 are bounded in both directions, so the maximum difference between them is also bounded; there is no danger of the difference exploding to infinity. I don't care which one is larger (in fact, either of them could be, and that is the primary reason the problem is so difficult). I just want to drive them as far apart as my model allows.

Now, I have solved the reverse problem before. That was the problem of trying to make X1 and X2 as close together as possible. In other words, I wanted to make the difference between X1 and X2 as small as possible in a linear program. The way I did it was as follows:

X3 ≥ X1 - X2

X4 ≥ X2 - X1

X3, X4 ≥ 0

Penalize X3 and X4 heavily in the objective function.

Now, if X1 ≥ X2, the penalty forces X4 to zero (its constraint X4 ≥ X2 - X1 is slack, and since X4 is penalized in the objective function, it has no reason to go any higher than its lower bound of zero), and it forces X3 to be exactly equal to X1 - X2 (once again, because of the penalty, it has no incentive to go any higher). If, on the other hand, X2 ≥ X1, the same logic forces X3 to zero and forces X4 to be exactly equal to X2 - X1.

Either way, the sum X3 + X4 becomes |X1 - X2|. And because X3 and X4 are penalized, the solver drives |X1 - X2| as small as possible, minimizing the difference between the two variables.
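To make the trick concrete, here is a minimal sketch using scipy.optimize.linprog. The model around it is invented for illustration (X1 is confined to [7, 10] and X2 to [0, 3] so that the two variables can't simply meet), but the X3/X4 constraints are exactly the ones above.

```python
# Minimal sketch of the minimize-|X1 - X2| trick with SciPy.
# The bounds on X1 and X2 are made up for illustration; in a real
# model they would come from the problem being solved.
from scipy.optimize import linprog

# Variable order: [X1, X2, X3, X4]
c = [0, 0, 1, 1]            # penalize X3 and X4, i.e. minimize X3 + X4

# X3 >= X1 - X2  ->  X1 - X2 - X3 <= 0
# X4 >= X2 - X1  ->  X2 - X1 - X4 <= 0
A_ub = [[1, -1, -1, 0],
        [-1, 1, 0, -1]]
b_ub = [0, 0]

bounds = [(7, 10),          # X1 (made-up bounds keeping X1 high)
          (0, 3),           # X2 (made-up bounds keeping X2 low)
          (0, None),        # X3 >= 0
          (0, None)]        # X4 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)  # optimum: X1 = 7, X2 = 3, X3 + X4 = |X1 - X2| = 4
```

The solver pushes X1 down to 7 and X2 up to 3, the closest the made-up bounds allow, and X3 + X4 settles at exactly |X1 - X2| = 4.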

Unfortunately, I can't find an equivalent way to maximize the difference between X1 and X2. If I just change the penalties on X3 and X4 in the above formulation to incentives, the two derived variables will both rise to ∞ and make the problem unbounded.
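The unboundedness is easy to reproduce: flip the penalties in the sketch above into incentives (toy bounds on X1 and X2 are again assumed) and the solver reports an unbounded problem.

```python
# Turning the penalties on X3 and X4 into incentives makes the LP
# unbounded: X3 >= X1 - X2 only bounds X3 from below, so X3 and X4
# can grow without limit once they are rewarded in the objective.
from scipy.optimize import linprog

# Variable order: [X1, X2, X3, X4]
c = [0, 0, -1, -1]          # "incentivize" X3 + X4, i.e. maximize it
A_ub = [[1, -1, -1, 0],     # X3 >= X1 - X2
        [-1, 1, 0, -1]]     # X4 >= X2 - X1
b_ub = [0, 0]
bounds = [(0, 10), (0, 10), (0, None), (0, None)]  # toy bounds on X1, X2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.status)  # status 3: the problem is unbounded
```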

Instead, I tried to modify the formulation to the one below:

X3 ≤ X1 - X2

X4 ≤ X2 - X1

X3, X4 ≤ 0

I then incentivized both variables heavily in the objective function. The incentive forces one of the two variables to zero (it has no reason to go lower), and the other to -|X1 - X2|. It looked promising at first. But because of the incentive, the solver pushes that negative value toward zero, shrinking the difference and defeating the entire purpose of the model. In fact, this formulation is just the first one with all the signs reversed: it only minimizes the difference between X1 and X2, and does nothing to maximize it.
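Running this reversed formulation through the same kind of sketch confirms the failure: with X3, X4 ≤ 0 and an incentive on both (toy bounds on X1 and X2 assumed), the optimum just pulls X1 and X2 together.

```python
# The reversed formulation: X3 <= X1 - X2, X4 <= X2 - X1, X3, X4 <= 0,
# with an incentive on X3 + X4. Since X3 + X4 <= -|X1 - X2|, maximizing
# the sum drives |X1 - X2| to zero -- the opposite of what was wanted.
from scipy.optimize import linprog

# Variable order: [X1, X2, X3, X4]
c = [0, 0, -1, -1]          # incentivize, i.e. maximize X3 + X4
A_ub = [[-1, 1, 1, 0],      # X3 <= X1 - X2  ->  X3 - X1 + X2 <= 0
        [1, -1, 0, 1]]      # X4 <= X2 - X1  ->  X4 - X2 + X1 <= 0
b_ub = [0, 0]
bounds = [(0, 10), (0, 10), (None, 0), (None, 0)]  # toy bounds on X1, X2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # X1 == X2 at the optimum, so |X1 - X2| = 0
```

At the optimum X3 + X4 = 0 forces X3 = X4 = 0 (both are nonpositive), and the two constraints then force X1 = X2 exactly, which is why the formulation minimizes the difference.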

What I seem to need is some way of creating a new derived variable from X1 and X2 with the following property: it is zero when X1 - X2 is negative, and equal to X1 - X2 when that quantity is positive. The derived variable would either take these values automatically (with no incentives or penalties in the objective function), or be forced to them when heavily incentivized. In the first case, I would build one such variable from X1 - X2 and another from X2 - X1, add them up (their sum would equal |X1 - X2|), and incentivize the sum heavily in the objective function. In the second case, the derived variables are already incentivized, so they would automatically maximize |X1 - X2|. Unfortunately, I don't know how to construct such variables in a linear program. So, I am stuck.
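In plain terms, the derived variable being described is max(X1 - X2, 0), the positive part of the difference. Outside an optimization model it is trivial to compute, as the snippet below shows; the open question is how to force an LP variable to take this value without a penalty or incentive pulling it somewhere else.

```python
# The derived variable described above is the positive part of the
# difference: zero when X1 - X2 is negative, X1 - X2 when it is positive.
def positive_part(x1, x2):
    return max(x1 - x2, 0.0)

# The two derived variables together recover the absolute difference.
print(positive_part(7, 3))                       # 4.0
print(positive_part(3, 7))                       # 0.0
print(positive_part(7, 3) + positive_part(3, 7)) # |7 - 3| = 4.0
```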

Have any of you had a problem like this? Do you have a solution that works? Am I missing something obvious? I would love to hear back from you. Use the comments below to let me know!
