Sharpening My Tools Part 2

Developing an iOS application involves many small, repetitive tasks. When you build your user interface in code, you must also create and manage your constraints in code. Depending on the layout, this can become extremely repetitive and complicated. You can write each constraint yourself, but that gets time consuming, and the code quickly becomes messy and confusing.

I started by creating a function that accepts the child view and the parent layout guide, which solved the common case of a child filling its parent. It doesn't solve everything, however; for example, sometimes you don't want the bottom edge to match the parent. Initially I allowed a boolean to be passed in for each constraint, so any side could be disabled and handled manually.

However, this still left my code messy and complicated, so I decided to make the function cover more of my needs. In the next version I allowed up to three parameters for each side constraint: a flag named for the side, to enable or disable it; a constant, to allow a distance from the anchor; and a target, for those times when the side shouldn't be pinned to the parent element.

This still wasn't quite enough; I needed two other capabilities. First, the ability to enable and disable constraints from inside other classes: the function now returns an object holding the NSLayoutConstraint for each side, allowing other classes to access them. Second, width and height anchor constraints, so I allow those constants to be passed in as parameters as well.
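The post doesn't include the source, but a minimal sketch of the kind of helper described might look like this (all names, such as PinResult and pin, are hypothetical and not the actual API):

```swift
import UIKit

// A sketch of the pinning helper described above. All names
// (PinResult, pin) are hypothetical; the actual API may differ.
struct PinResult {
    var top, bottom, leading, trailing, width, height: NSLayoutConstraint?
}

@discardableResult
func pin(_ child: UIView, to guide: UILayoutGuide,
         top: Bool = true, topConstant: CGFloat = 0, topTarget: NSLayoutYAxisAnchor? = nil,
         bottom: Bool = true, bottomConstant: CGFloat = 0, bottomTarget: NSLayoutYAxisAnchor? = nil,
         leading: Bool = true, leadingConstant: CGFloat = 0,
         trailing: Bool = true, trailingConstant: CGFloat = 0,
         width: CGFloat? = nil, height: CGFloat? = nil) -> PinResult {
    child.translatesAutoresizingMaskIntoConstraints = false
    var r = PinResult()
    if top {
        r.top = child.topAnchor.constraint(equalTo: topTarget ?? guide.topAnchor,
                                           constant: topConstant)
    }
    if bottom {
        r.bottom = child.bottomAnchor.constraint(equalTo: bottomTarget ?? guide.bottomAnchor,
                                                 constant: -bottomConstant)
    }
    if leading {
        r.leading = child.leadingAnchor.constraint(equalTo: guide.leadingAnchor,
                                                   constant: leadingConstant)
    }
    if trailing {
        r.trailing = child.trailingAnchor.constraint(equalTo: guide.trailingAnchor,
                                                     constant: -trailingConstant)
    }
    if let w = width { r.width = child.widthAnchor.constraint(equalToConstant: w) }
    if let h = height { r.height = child.heightAnchor.constraint(equalToConstant: h) }
    // Returning the constraints lets other classes toggle isActive later.
    NSLayoutConstraint.activate([r.top, r.bottom, r.leading, r.trailing, r.width, r.height]
        .compactMap { $0 })
    return r
}
```

With a shape like this, `pin(view, to: parent.safeAreaLayoutGuide, bottom: false)` would fill the parent on every edge except the bottom, which stays free for a manual constraint.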

This utility class has cleaned up my code quite a bit, and I hope others will find it useful. It is available on GitHub.

Sharpening My Tools Part 1

I've been working on a full-fledged PDF reader application for iOS. During development I have wanted to use icons from Font Awesome as buttons in the app.

Apple's iOS doesn't work well with SVG files or the Font Awesome font files, so to work around this I decided to make a tool that converts Font Awesome SVG files into image files that can be used directly in the application.

The tool I wrote is a command line tool that generates three image files corresponding to the 1x, 2x, and 3x resolutions that iOS apps require. That alone isn't quite enough to finish the process: normally you must then create an imageset in Xcode and drag in the images. We can automate this step by creating a specific directory structure with a JSON manifest that tells Xcode where the files are.
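For reference, an imageset is just a folder inside the asset catalog (e.g. `Assets.xcassets/icon.imageset/`) containing the three images plus a `Contents.json` manifest. A minimal sketch, with hypothetical filenames:

```json
{
  "images" : [
    { "idiom" : "universal", "scale" : "1x", "filename" : "icon.png" },
    { "idiom" : "universal", "scale" : "2x", "filename" : "icon@2x.png" },
    { "idiom" : "universal", "scale" : "3x", "filename" : "icon@3x.png" }
  ],
  "info" : { "version" : 1, "author" : "xcode" }
}
```

Writing this file next to the generated images is all Xcode needs to pick the set up as a normal asset.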

The source code of this script is available on GitHub, and it can be modified for use with any SVG image set; I use it with both the Font Awesome Free version and Font Awesome Pro.

Unity Mesh Modification

As part of the technical preview for a project I'm working on, I have run into an interesting problem that I'm trying to solve. In Unity, when you use the OnCollisionEnter/OnCollisionExit collision handlers, you sometimes need information about the mesh being collided with, and the collision alone doesn't provide it.

Modifying the mesh requires knowing which triangles you are interacting with; however, OnCollisionEnter doesn't give you this information. You need a RaycastHit to access it. One way to get a RaycastHit is to perform a Physics.Raycast call based on the collision contacts. Once you have the RaycastHit, you can use its triangleIndex to get the required information, including the vertex data.

Going from the OnCollisionEnter collision contacts to a RaycastHit that gives you the correct information isn't as simple as it should be. You have to compute an offset point just behind the collision point and cast the ray from there; otherwise the cast detects triangles further away and misses the one it should be detecting. Below is the code I used to get this all working.

private void OnCollisionEnter(Collision collision) {
    ContactPoint[] points = collision.contacts;
    for (int i = 0; i < points.Length; i++) {
        // Offset the origin to just behind the contact point so the ray
        // hits the triangle we actually collided with.
        Vector3 offsetPoint = points[i].point - (points[i].normal + (Vector3.down * 1.1f));
        Debug.DrawRay(offsetPoint, points[i].normal, Color.green);
        Debug.DrawRay(offsetPoint, -points[i].normal, Color.yellow);
        Debug.LogFormat("{0}-{1}=>{2}", points[i].point, points[i].normal, offsetPoint);
        if (Physics.Raycast(offsetPoint, -points[i].normal, out RaycastHit hit, Mathf.Infinity)) {
            Debug.LogFormat("{0}||{1}", points[i].point, hit.triangleIndex);
            if (hit.triangleIndex != -1) {
                Mesh mesh = hit.collider.gameObject.GetComponent<MeshFilter>().sharedMesh;
                Vector3[] verts = mesh.vertices;
                int[] triangles = mesh.triangles;
                // Transform the hit triangle's vertices into world space.
                Vector3 _a = hit.transform.TransformPoint(verts[triangles[hit.triangleIndex * 3]]);
                Vector3 _b = hit.transform.TransformPoint(verts[triangles[hit.triangleIndex * 3 + 1]]);
                Vector3 _c = hit.transform.TransformPoint(verts[triangles[hit.triangleIndex * 3 + 2]]);
                Debug.LogFormat("{0}|{1}|{2}", _a, _b, _c);
                Debug.DrawLine(_a, _b);
                Debug.DrawLine(_b, _c);
                Debug.DrawLine(_c, _a);
            }
        }
    }
}
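Once the triangle index is known, the mesh itself can be modified. The original post stops short of this step, so the following is only a sketch of one possible follow-up (the class and method names are hypothetical): push the three vertices of the hit triangle along the hit normal and write the array back.

```csharp
using UnityEngine;

// Hypothetical helper: displace the three vertices of the hit triangle.
// Mesh.vertices returns a copy, so the modified array must be assigned
// back for the change to take effect.
public static class MeshDeformer {
    public static void DisplaceTriangle(RaycastHit hit, float amount) {
        MeshFilter filter = hit.collider.GetComponent<MeshFilter>();
        if (filter == null || hit.triangleIndex == -1) return;
        // filter.mesh returns an instance unique to this object,
        // so the shared asset is left untouched.
        Mesh mesh = filter.mesh;
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;
        // The hit normal is in world space; convert it to local space
        // before applying it to the local-space vertices.
        Vector3 localNormal = hit.transform.InverseTransformDirection(hit.normal);
        for (int i = 0; i < 3; i++) {
            verts[tris[hit.triangleIndex * 3 + i]] += localNormal * amount;
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}
```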

Unity Custom Editor

This week I have been working on a Unity custom editor, with the goal of making meshes easier to work with.

It made sense to build this editor for the MeshFilter type, since that's the component Unity uses to store the Mesh object. Any time a MeshFilter is selected in the editor, the Inspector shows the custom editor.

The plan for the inspector was to show just the details of the mesh, such as the number of vertices and the number of triangles. This could then be expanded to show the values of the vertices and triangles, and finally to allow them to be edited live.

The first part of the custom inspector was very simple; the code that lists the vertex and triangle values is shown below.

for (int idx = 0; idx < mesh.vertices.Length; idx++) {
    string str = string.Format("{0} {1}", idx, mesh.vertices[idx].ToString());
    EditorGUILayout.LabelField(str); // one row per vertex
}
int count = 0;
for (int idx = 0; idx < mesh.triangles.Length / 3; idx++) {
    string str = string.Format(
        "{0}, {1}, {2}",
        mesh.triangles[count++], mesh.triangles[count++], mesh.triangles[count++]);
    EditorGUILayout.LabelField(str); // one row per triangle (three indices)
}
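For context, the loops above run inside a custom editor class. A sketch of the surrounding boilerplate might look like this (the class name is hypothetical):

```csharp
using UnityEditor;
using UnityEngine;

// A sketch of the boilerplate for a MeshFilter custom editor.
// The class name is hypothetical.
[CustomEditor(typeof(MeshFilter))]
public class MeshFilterInspector : Editor {
    public override void OnInspectorGUI() {
        Mesh mesh = ((MeshFilter)target).sharedMesh;
        if (mesh == null) return;
        // The "details" portion: vertex and triangle counts.
        EditorGUILayout.LabelField("Vertices", mesh.vertices.Length.ToString());
        EditorGUILayout.LabelField("Triangles", (mesh.triangles.Length / 3).ToString());
    }
}
```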

Showing the values of the vertices and triangles was simple. After viewing the output, I realized that most meshes have many duplicate vertices, hinting that an optimize function would come in handy. More on this optimization later. Below is the code used to optimize the mesh.

public (Vector3[], int[]) Optimize(Mesh mesh) {
    List<Vector3> verts = new List<Vector3>();
    Dictionary<int, int> nLocation = new Dictionary<int, int>();
    List<int> tris = new List<int>();
    // Map each original vertex index to the index of its first occurrence.
    for (int vIndex = 0; vIndex < mesh.vertices.Length; vIndex++) {
        Vector3 vert = mesh.vertices[vIndex];
        int idx = verts.IndexOf(vert);
        if (idx == -1) {
            idx = verts.Count;
            verts.Add(vert);
        }
        nLocation[vIndex] = idx;
    }
    // Remap every triangle index onto the de-duplicated vertex list.
    for (int tIndex = 0; tIndex < mesh.triangles.Length; tIndex++) {
        tris.Add(nLocation[mesh.triangles[tIndex]]);
    }
    return (verts.ToArray(), tris.ToArray());
}

After running this code on the Cube mesh, the result contains far fewer vertices while retaining the original shape.
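One detail worth noting when writing the arrays back to a Mesh: assigning a smaller vertex array while the old triangles still reference higher indices raises an out-of-bounds error, so the triangles should be cleared first. A sketch (Optimize is the method above):

```csharp
// Apply the de-duplicated arrays back to the mesh. Clearing the
// triangle array first avoids out-of-bounds errors while the
// vertex array shrinks.
(Vector3[] verts, int[] tris) = Optimize(mesh);
mesh.triangles = new int[0];
mesh.vertices = verts;
mesh.triangles = tris;
mesh.RecalculateNormals();
```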

Solution #1 Unity Mesh Walls

The solution I have chosen is as follows: use a Bézier curve with two control points, one at each end of the line, and set the rotation based on the tangent at the midpoint. The code is shown below.

private static Vector3 CalculateTangent(float t, Vector3 p0, Vector3 p1) {
    // Derived from the cubic Bézier derivative; at t = 0.5 this
    // points from p0 toward p1.
    float a = 1 - t;
    a = a * a * 3;
    float c = t * t * 3;
    return (-a * p0) + (c * p1);
}

public Mesh CalculateMesh() {
    Mesh mesh = new Mesh();
    List<Vector3> verts = new List<Vector3>();
    // Rotate the wall's cross-section to face along the tangent.
    Quaternion quaternion = Quaternion.LookRotation(CalculateTangent(0.5f, startPosition, endPosition));
    verts.Add( (quaternion * new Vector3(-widthOffset, 0.0f, 0.0f)) + startPosition );   // 0
    verts.Add( (quaternion * new Vector3(widthOffset, 0.0f, 0.0f)) + startPosition );    // 1
    verts.Add( (quaternion * new Vector3(-widthOffset, height, 0.0f)) + startPosition ); // 2
    verts.Add( (quaternion * new Vector3(widthOffset, height, 0.0f)) + startPosition );  // 3
    verts.Add( (quaternion * new Vector3(-widthOffset, 0.0f, 0.0f)) + endPosition );     // 4
    verts.Add( (quaternion * new Vector3(widthOffset, 0.0f, 0.0f)) + endPosition );      // 5
    verts.Add( (quaternion * new Vector3(-widthOffset, height, 0.0f)) + endPosition );   // 6
    verts.Add( (quaternion * new Vector3(widthOffset, height, 0.0f)) + endPosition );    // 7
    Face[] faces = new Face[] {
        new Face(0, 1, 2, 3),
        new Face(0, 1, 4, 5),
        new Face(0, 2, 4, 6),
        new Face(1, 3, 5, 7),
        new Face(2, 3, 6, 7),
        new Face(4, 5, 6, 7)
    };
    List<int> tris = new List<int>();
    for (int i = 0; i < faces.Length; i++) {
        // Each quad face contributes two triangles. Face is a small
        // helper struct; the field names a, b, c, d are assumed here.
        tris.AddRange(new int[] { faces[i].a, faces[i].b, faces[i].c });
        tris.AddRange(new int[] { faces[i].c, faces[i].b, faces[i].d });
    }
    mesh.vertices = verts.ToArray();
    mesh.triangles = tris.ToArray();
    return mesh;
}

Problem #1 Unity Mesh Walls

This is a problem I encountered while working on my personal side project, a simulation game. Unity's Mesh class allows developers to create meshes dynamically at runtime. A mesh takes an array of Vector3s and an array of integers. Vector3 is a data type that stores the X, Y, and Z positions as floating point numbers.

Creating the array of Vector3s (the vertices) and the array of integers (the triangles) allows the developer to make any shape they would like. The array of integers, grouped into threes, tells the mesh system which three vertices, by index into the vertex array, make up each triangle. All triangles are rendered one-sided; developers can change which side is rendered by swapping the first and last indices of a triangle. For example, 1,2,3 becomes 3,2,1 to reverse it.
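A minimal sketch of these two arrays in action, a single one-sided triangle (the class name is hypothetical):

```csharp
using UnityEngine;

// One triangle from three vertices and three indices. Reversing the
// index order flips which side of the triangle is rendered.
public class TriangleExample : MonoBehaviour {
    void Start() {
        Mesh mesh = new Mesh();
        mesh.vertices = new Vector3[] {
            new Vector3(0f, 0f, 0f),  // 0
            new Vector3(1f, 0f, 0f),  // 1
            new Vector3(0f, 1f, 0f)   // 2
        };
        mesh.triangles = new int[] { 0, 1, 2 };
        // To render the opposite side instead, reverse the winding:
        // mesh.triangles = new int[] { 2, 1, 0 };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```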

Wall generation takes a starting and an ending point, passed in as Vector3s, along with an extrude height and an extrude width. This will allow a lot of customization in the long run. Below is an image showing the wall generation tool with two selected points that are 3 units apart in the X direction.

However when you make the same distance wall in the Z direction it looks more like the image below.

Broken Dynamically Generated Wall

The problem is that when walls are generated in the Z direction, the extrusion outward from the input points isn't calculated correctly, and the wall comes out flat. Below is the code used to add the vertices to the list of vertices.

verts.Add(startPosition + new Vector3(0.0f, 0.0f, -widthOffset)); // 0
verts.Add(startPosition + new Vector3(0.0f, 0.0f, widthOffset));  // 1
verts.Add(startPosition + new Vector3(0.0f, height, -widthOffset)); // 2
verts.Add(startPosition + new Vector3(0.0f, height, widthOffset));  // 3
verts.Add(endPosition + new Vector3(0.0f, 0.0f, -widthOffset)); // 4
verts.Add(endPosition + new Vector3(0.0f, 0.0f, widthOffset));  // 5
verts.Add(endPosition + new Vector3(0.0f, height, -widthOffset));// 6
verts.Add(endPosition + new Vector3(0.0f, height, widthOffset)); // 7

Potential Solutions

Solution #1: Create the object using the distance between the start position and end position in the X(or Z) direction and then rotate the object towards the end point. This might work if you use the origin as the start position and then set transform.position on the object to move it to the correct position.

Solution #2: Use a mathematical formula, derived from the formula for the tangent of a Bézier curve, to calculate the rotation needed at the start and end points.

Solution #3: Use a Bézier curve with 4 points and set the two control points to the midpoint of the line. I have already implemented a 4-point Bézier wall generation system, so it wouldn't be too hard to convert. A four-point Bézier curve is shown below in wireframe mode.
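For reference, Solutions #2 and #3 both build on the standard cubic Bézier derivative; a sketch of that formula:

```csharp
using UnityEngine;

// Standard cubic Bézier derivative:
//   B'(t) = 3(1-t)^2 (p1-p0) + 6(1-t)t (p2-p1) + 3t^2 (p3-p2)
// With both control points at the line's midpoint (Solution #3), the
// tangent at t = 0.5 reduces to 0.75 * (p3 - p0), i.e. along the line.
public static Vector3 CubicTangent(float t, Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3) {
    float u = 1f - t;
    return 3f * u * u * (p1 - p0)
         + 6f * u * t * (p2 - p1)
         + 3f * t * t * (p3 - p2);
}
```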

Polygon Generation in Unity

Dynamic generation of polygonal 2D meshes is a useful stepping stone toward more dynamic shapes. Unity's Mesh object can be broken down into two major components: an array of Vector3s and an array of integers. The simplest polygons have equal angles and side lengths, which makes the calculation easy: we use the cosine and sine functions to calculate the X and Y values.

float angle = Mathf.Deg2Rad * (360f / sides); // 360f avoids integer division
Vector3[] verts = new Vector3[sides];
for (int i = 0; i < verts.Length; i++) {
    float h = angle * i;
    float x = Mathf.Cos(h) * distance;
    float y = Mathf.Sin(h) * distance;
    verts[i] = new Vector3(x, y, 0.0f);
}

The next step in making the mesh is to determine the triangles that make up the face. First we figure out how many triangles we need, which is the number of vertices minus 2. Then we walk through each triangle and determine the index of each of its vertices. Making the simple assumption that every triangle of the face starts at the first vertex, we only have to determine the other two. Once we have the vertex indices of each triangle, we add them to the mesh's triangle index list, ordering them so the faces point in the correct direction.

List<int> triangles = new List<int>();
int triangleCount = verts.Length - 2;
for (int i = 0; i < triangleCount; i++) {
    int a = 0;
    int b = 1 + i;
    int c = 2 + i;
    switch (faceDirection) {
        case FaceDirection.FRONT:
            triangles.AddRange(new int[] { c, b, a });
            break;
        case FaceDirection.BACK:
            triangles.AddRange(new int[] { a, b, c });
            break;
        case FaceDirection.BOTH:
            triangles.AddRange(new int[] { c, b, a });
            triangles.AddRange(new int[] { a, b, c });
            break;
    }
}

Now that we have both the components we need, time to apply them to the mesh object.

Mesh mesh = new Mesh();
mesh.vertices = verts;
mesh.triangles = triangles.ToArray();

Once you have the Mesh object, assign it to the MeshFilter component on a game object. This can be done in either the Start or Update method.
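Putting the whole section together, a sketch of one way to package the steps as a component (the class name and public fields are hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical component tying the steps above together: build a regular
// polygon mesh and assign it to the MeshFilter on Start.
[RequireComponent(typeof(MeshFilter))]
public class PolygonGenerator : MonoBehaviour {
    public int sides = 6;
    public float distance = 1.0f;

    void Start() {
        // Vertices on a circle, using cosine and sine.
        float angle = Mathf.Deg2Rad * (360f / sides);
        Vector3[] verts = new Vector3[sides];
        for (int i = 0; i < verts.Length; i++) {
            verts[i] = new Vector3(Mathf.Cos(angle * i), Mathf.Sin(angle * i), 0f) * distance;
        }
        // Fan triangulation from vertex 0; this winding matches the
        // FRONT case shown above.
        List<int> triangles = new List<int>();
        for (int i = 0; i < sides - 2; i++) {
            triangles.AddRange(new int[] { 2 + i, 1 + i, 0 });
        }
        Mesh mesh = new Mesh();
        mesh.vertices = verts;
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```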