>>12663969

>forming a rectangle

Garbage way of writing what you're trying to say. Anyway, this is a graph theory problem in disguise.

Say your grid is [math] n \times n [/math], with rows and columns indexed by [math][n] = \{1, \ldots, n\}[/math]. Encode your set of filled squares as [math] A = (A_1, A_2, \ldots, A_n) [/math], where [math]A_i[/math] is the set of column indices filled in row [math]i[/math]. So your first example is ({1,2,3}, {1}, {1}) and your second is ({1,2}, {1}, {1,2}). So far, so good?
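That encoding is trivial to compute from a 0/1 grid. A quick sketch (the grid below is my reconstruction of what the first example probably looks like — top row full plus the first column):

```python
def row_sets(grid):
    # grid[i][j] == 1 means row i+1, column j+1 is filled;
    # return the tuple (A_1, ..., A_n) of filled column indices per row
    return tuple({j + 1 for j, v in enumerate(row) if v} for row in grid)

example1 = [[1, 1, 1],
            [1, 0, 0],
            [1, 0, 0]]

print(row_sets(example1))  # ({1, 2, 3}, {1}, {1})
```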

So now consider [math]K_n[/math]. Each set [math]A_i[/math] is a set of vertices of [math]K_n[/math]. Let [math]C_i[/math] be the clique induced by [math]A_i[/math]. Then [math]A[/math] is "good" if and only if the cliques [math]C_i[/math] are pairwise *edge*-disjoint: two rows sharing an edge of [math]K_n[/math] means they share two common columns, and those two rows and two columns are exactly a rectangle. (They're still allowed to share a vertex, i.e. a single common column.)
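In grid terms the "good" condition is just: no two rows share two or more common columns. A quick check (Python and the names are mine):

```python
from itertools import combinations

def is_good(rows):
    # the induced cliques are pairwise edge-disjoint iff
    # no two rows share two (or more) common filled columns
    return all(len(a & b) <= 1 for a, b in combinations(rows, 2))

print(is_good(({1, 2, 3}, {1}, {1})))   # True: every pair of rows meets in at most column 1
print(is_good(({1, 2}, {1}, {1, 2})))   # False: rows 1 and 3 share columns 1 and 2 (a rectangle)
```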

So we can rephrase the question as follows: if we pack [math]n[/math] pairwise edge-disjoint cliques [math]C_1,\ldots,C_n[/math] into [math]K_n[/math] (not every edge has to be used), what is the maximum possible sum [math] \sum_i |C_i|[/math]?

I don't even know if graph theory will be useful here, but I just think that observation is neato.

Anyway, I think you'll get the best result by putting a roughly equal number of vertices (filled cells) in each clique (row). This is because one large clique eats up disproportionately many edges: a row with [math]k[/math] filled cells costs [math]\binom{k}{2}[/math] edges but only gains you [math]k[/math] cells. For example, in a 5x5, there are ten edges to work with. We can spend them all on one row of five and four rows of one, or on one row of four (six edges) and four rows of two (one edge each), or on two rows of three (three edges each) and three rows of two (one edge each). The second and third options each get us 12 filled cells, while the first one only gets us 9.
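The bookkeeping in that paragraph is easy to tabulate. Note this only checks the edge *budget* — a set of row sizes fitting under [math]\binom{n}{2}[/math] edges doesn't by itself guarantee an actual edge-disjoint packing exists, so treat it as an upper-bound sanity check:

```python
from math import comb

def budget(sizes):
    # a row with k filled cells is a k-clique: costs C(k,2) edges, gains k cells
    return sum(comb(k, 2) for k in sizes), sum(sizes)

# the three 5x5 options from the post; K_5 has C(5,2) = 10 edges to spend
for sizes in [(5, 1, 1, 1, 1), (4, 2, 2, 2, 2), (3, 3, 2, 2, 2)]:
    edges, cells = budget(sizes)
    print(sizes, "->", edges, "edges,", cells, "cells")
```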

If someone else wants to continue with this idea, go ahead.