Data Bloom

Metaheuristic Programming

Not all problems have a closed-form mathematical solution. At times, the solution space must be searched exhaustively. Often there are characteristics, such as a gradient-descent path or local optima, that can be leveraged to reach a solution quickly. Metaheuristic programming is designed to formulate, explore, guide, and select many candidate solutions for a problem in an effort to find the global optimum. While the result is not guaranteed to be the global optimum, the candidate generation and selection lead to many good solutions that are likely to include a best one.
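
As a minimal sketch of the idea (this example is ours, not part of the notebook below): random-restart hill climbing, one of the simplest metaheuristics, runs a greedy local search from many random seeds and keeps the best local optimum it finds.

import math
import random

# A bumpy cost surface (hypothetical, for illustration): local minima sit near
# the multiples of pi, while the global minimum is at x = 0
def cost(x):
    return x * x + 10.0 * math.sin(x) ** 2

# Greedy local search: accept a random neighbor only if it improves the cost
def hill_climb(start, step=0.1, iters=1000):
    best = start
    for _ in range(iters):
        candidate = best + random.uniform(-step, step)
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Metaheuristic layer: restart the local search from many random seeds
# and select the best local optimum among all candidates
candidates = [hill_climb(random.uniform(-10.0, 10.0)) for _ in range(20)]
best = min(candidates, key=cost)
print("best x = {0:.3f}, cost = {1:.3f}".format(best, cost(best)))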

Many what-if exploratory problems fall into the metaheuristic category. If a business needs to explore returns and investments in a situation where the priors are unknown (meaning an instance of this nature has not been seen before and some assumptions must be made), Monte Carlo methods can be used to simulate the outcomes and preemptively minimize risk.
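
For instance (a hypothetical sketch with assumed return figures, not part of the puzzle below), a business could simulate an uncertain multi-year investment many times and read risk directly off the simulated distribution:

import random

# Hypothetical what-if: a position returns about 5% a year on average,
# with substantial (assumed Gaussian) year-over-year uncertainty
def simulate_return(principal=10000.0, years=5):
    for _ in range(years):
        principal *= 1.0 + random.gauss(0.05, 0.15)
    return principal

# Monte Carlo: run the scenario many times and inspect the outcome distribution
outcomes = sorted(simulate_return() for _ in range(10000))
median = outcomes[len(outcomes) // 2]
worst_5pct = outcomes[len(outcomes) // 20]  # 5th percentile, a crude risk measure
print("median = {0:.0f}, 5th percentile = {1:.0f}".format(median, worst_5pct))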

Consider a trivial (vanity) problem. Imagine being handed a puzzle of many small grayscale pieces (let us assume there is no limit on the number of available pieces). You are to assemble a masterpiece from the pieces.

Mona Lisa

We want to assemble the Mona Lisa from the pieces.

In [1]:
# Some quick imports for reading the original Mona Lisa
from skimage.io import imread
from PIL import Image
import random

# URL 
url = 'https://news.artnet.com/app/news-upload/2015/08/mona-lisa_2-e1440166499927.jpg'

# Read and convert the original masterpiece to grayscale
orig_image = Image.fromarray(imread(url)).convert("L")
orig_image
Out[1]:

Let us build some basic image manipulation routines, like the ability to shard a big image into multiple puzzle pieces addressable by a 2-dimensional (row, col) tuple. Let us also build an image comparison function that we can use as the cost function to guide our metaheuristic search.

In [2]:
# PIL-related imports
from PIL import ImageFilter, ImageDraw, ImageChops
from functools import reduce  # a builtin in Python 2; imported for Python 3 compatibility
import math, operator

# Round a float color value to the nearest integer gray
rd = lambda x: int(round(x))

# Divide the image into a grid of 100 x 100 puzzle pieces
(xinc, yinc) = (100, 100)

# The Mona Lisa picture is 600 x 400 px
(W, H) = (600, 400)

# Each piece is therefore 6 px wide by 4 px high
(w, h) = (W // xinc, H // yinc)


# Given a bigger PIL image, and puzzle-piece coordinates, select a cropped image/puzzle
def get_block_image(img, i, j):
    return img.crop((i * w, j * h, (i + 1) * w, (j + 1) * h))


# Given many puzzles in coordinate form (idx, color), draw a puzzle on a white canvas
def draw_color_image(arr):
    # Create a new image of white canvas
    palette = Image.new('L', (W, H), 255)
    # Get a drawing handle
    dr = ImageDraw.Draw(palette)
    # For each puzzle index, seek to the correct column and row
    # (the linear index encodes idx = x * yinc + y, matching the slicing order below)
    for (idx, color) in arr:
        x = idx // yinc
        y = idx % yinc
        dr.rectangle((x * w, y * h, (x + 1) * w, (y + 1) * h), fill=rd(color))
    # Return the PIL image with puzzle pieces assembled
    return palette


# Given two images or puzzles, find how different they are.
# Our selected puzzle piece must fit nicely, i.e. have the minimum
# distance from the original Mona Lisa picture in its place.
# This scores the difference histogram, weighting larger pixel
# differences quadratically.
def rmsdiff(im1, im2):
    hist = ImageChops.difference(im1, im2).histogram()
    return reduce(operator.add, (count * (i ** 2) for (i, count) in enumerate(hist)))

Let us rip the original image into small blocks. Since we will be comparing many puzzle blocks against the original (at their respective positions), let us shard the original image and cache the slices for reuse.

In [3]:
# Create a blank puzzle set
orig_slices = []

# Traverse the original image and collect the puzzle slices
# (linear index = x * yinc + y, the same order draw_color_image expects)
for x in range(xinc):
    for y in range(yinc):
        # Cache the slice in global memory
        orig_slices.append(get_block_image(orig_image, x, y))

Now let us randomly place many kinds of grayscale pieces into the puzzle slots. For each such placement, let us compare how different the placed piece's grayscale histogram is from the original art at that position. Of many such random placements, we select the piece that is least different from the original.

In [4]:
# Import a directed-walk routine. We will start at random color choices and minimize the deviation from the original
from scipy.optimize import minimize_scalar

# There are only 256 grays in the colorspace, so let us pre-cache a solid puzzle
# image for each gray to save computation time
gray_blocks = [
    Image.new(mode="L", size=(w, h), color=gray) for gray in range(256)
]


# Given a gray level c for the puzzle piece at a given index, return how
# different that piece is from the master Mona Lisa at the same position
def minfn(c, idx):
    return rmsdiff(orig_slices[idx], gray_blocks[rd(c)])


# For all 10,000 puzzle slots, start with random grayscale placements
x = [255. * random.random() for _ in range(xinc * yinc)]

# For each slot, minimize the gap between the placed gray and the real Mona Lisa.
# (The 'bounded' method searches the interval directly; the random placements
# above only fix the size of the candidate population.)
xs = [
    minimize_scalar(
        minfn, method='bounded', bounds=(0, 255), args=(idx,))
    for (idx, c) in enumerate(x)
]

# Now we have the population/puzzle candidates that have presumably "a minimum" distance, 
# if not "the minimum" distance
puzzle_colors = [rd(res.x) for res in xs]
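
As an aside, since each piece can take only 256 gray values, an exhaustive scan per piece is also feasible and guarantees the true minimum; the bounded scalar search above is the metaheuristic stand-in. A sketch reusing the functions defined earlier:

# Exhaustive alternative (exact rather than metaheuristic): score all 256 grays
# for every piece and keep the gray closest to the original slice
exact_colors = [
    min(range(256), key=lambda c: rmsdiff(orig_slices[idx], gray_blocks[c]))
    for idx in range(xinc * yinc)
]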

Rendering

Let us see how our assembled puzzle (presumably a close replica of the Mona Lisa) looks.

In [5]:
# Paint the puzzle image
draw_color_image(enumerate(puzzle_colors))
Out[5]:

Linear Programming

Caucus Debate

An elephant and a donkey got into a heated argument about wealth inequality. The donkey and the elephant have X and Y peanuts, respectively. But the donkey wanted more because he worked hard: if the donkey took a peanut away from the elephant, the two would have the same number of peanuts, and that was "fair", the donkey surmised! The elephant is bigger, so the elephant thought it deserved more: if the elephant won a peanut over the donkey, the elephant would have twice as many peanuts as the donkey. So they argued...

What do you suppose X and Y were? Let us compute our best guess for this simple problem of X and Y.

In [6]:
from scipy.optimize import minimize


# Cost function: sum of squared residuals of the two peanut-exchange equations
def cost(p):
    x, y = p[0], p[1]
    ydiff = 2 * x - 2 - y - 1  # y + 1 = 2(x - 1): elephant wins a peanut from donkey
    xdiff = x + 1 - y + 1      # y - 1 = x + 1: both have the same after donkey takes one
    return float(ydiff * ydiff + xdiff * xdiff)


from random import randint

# Spread seeds for the search across the entire Spark cluster
# (sc is the notebook's SparkContext)
initial_XY = sc.parallelize(
    [(randint(0, 500), randint(0, 500)) for _ in range(1000)])


# For each seed, run a "simulation": descend to the nearest local optimum
def return_cost(p):
    res = minimize(cost, p, method='nelder-mead', options={'xtol': 1e-3})
    # Return the cost and the coordinates of the local optimum
    return (cost(res.x), res.x)


# Compute the cost of each what-if scenario/simulation in parallel; reduce to a global result
global_result_tuple = initial_XY.map(return_cost).reduce(
    lambda a, b: a if a[0] < b[0] else b)

# Print the result that is presumably the best
print("Found X and Y as {0} to yield a status-quo-index of {1}".format(
    [rd(v) for v in global_result_tuple[1]], global_result_tuple[0]))
Found X and Y as [5, 7] to yield a status-quo-index of 7.41839109015e-10

It looks like the donkey had 5 peanuts and the elephant had 7 peanuts at the beginning.
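
We can confirm this algebraically: the two exchanges give y - 1 = x + 1 and y + 1 = 2(x - 1); substituting y = x + 2 into the second yields x + 3 = 2x - 2, so x = 5 and y = 7. A quick sanity check:

donkey, elephant = 5, 7
assert elephant - 1 == donkey + 1        # donkey takes a peanut: both have 6
assert elephant + 1 == 2 * (donkey - 1)  # elephant takes a peanut: 8 = 2 * 4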

Operations Research

Bridgeway Company manufactures printers and keyboards. The profit margins of a printer and a keyboard are $30 and $20, respectively. Two types of skilled labor are required to manufacture these products: soldering and assembling. A printer requires 2 hours of soldering and 1 hour of assembling. A keyboard requires 1 hour of soldering and 1 hour of assembling. Bridgeway has 1,000 soldering hours and 800 assembling hours available per week. There are no constraints on the supply of raw materials. Demand for keyboards is unlimited, but at most 350 printers are sold each week. Bridgeway wishes to maximize its weekly total contribution margin.

Problem statement: choose production levels for printers (X) and keyboards (Y) that maximize

Z (profit) = $30X + $20Y (objective function)

and satisfy the following constraints:

2X + Y <= 1,000 (soldering time constraint)
X + Y <= 800 (assembling time constraint)
X <= 350 (demand constraint for printers)
X >= 0, Y >= 0 (sign restrictions)

In [7]:
# Objective function -- let us denote x as x[0] and y as x[1].
# Notice the negation: minimizing c.x with c = (-30, -20) turns the
# maximization of 30*x[0] + 20*x[1] into a minimization problem
c = [-30, -20]

# Left-hand side of the inequality constraints
A = [[2, 1], [1, 1]]

# Right-hand side of the inequality constraints
B = [1000, 800]

# Boundary conditions for X and Y (the printer demand cap becomes an upper bound on X)
x0_bounds = (0, 350)
x1_bounds = (0, None)

# Optimize the problem
from scipy.optimize import linprog
result = linprog(
    c, A_ub=A, b_ub=B, bounds=(x0_bounds, x1_bounds), options={"disp": False})

# Show the assignment
from IPython.display import HTML
HTML(
    "<h4>Bridgeway will need to make <u>{0}</u> printers and <u>{1}</u> keyboards to maximize its weekly profit at <u>{2}</u> USD</h4>".
    format(rd(result.x[0]), rd(result.x[1]), rd(-1 * result.fun)))
Out[7]:

Bridgeway will need to make 200 printers and 600 keyboards to maximize its weekly profit at 18000 USD
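
A quick check (a sketch using the numbers above) confirms the optimum sits at the corner where both labor constraints bind:

printers, keyboards = 200, 600
# Both labor constraints are exhausted exactly at the optimum
assert 2 * printers + keyboards == 1000  # soldering hours fully used
assert printers + keyboards == 800       # assembling hours fully used
assert printers <= 350                   # the demand cap is slack
print(30 * printers + 20 * keyboards)    # 18000 USD weekly contribution margin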

Conclusion

We wish all problems had a deterministic or stochastic solution. At times, when we must search the solution space exhaustively, we can use the scale-out processing conferred by a big data platform. At other times, when cost estimates and constraints can guide the search traversal, we can leverage that directionality for quick metaheuristic outcomes.