Environment Mapping with ARCore and Unity3D

Devon O. · May 05, 2018 · Shaders, Unity3D · 3 comments

So here’s a little trick for mapping the device camera image onto a 3D model when using Unity3D and Google’s ARCore, in order to create some realistic environment reflections and further blend augmented reality with plain old reality. I originally got the idea from this Unity blog post. Before digging into John Sietsma’s solution, though, I implemented my own and figured there were big enough differences to warrant describing my approach.

Before jumping into some code, here’s a little look at the effect in action. You can see how the floating torus reflects both the color of the candle and my hand waving around it. In this video, the reflection amount is set to .85 and the blur amount to 7 (those properties will be made clear later):

[Video: the environment-mapped torus in action]

Overview

At a high level, the effect is achieved by taking the camera image, blurring it, then mapping it onto a 3D model using radial coordinates derived from the world normals. Is the effect perfect? Not at all. Really, the entire thing is faked. First, because we’re using the camera as an environment map, it’s entirely possible to see an item behind the 3D model reflected in front of it. Also, the UVs of the camera image will never match the model UVs – and, even if they somehow did, the camera image texture is not tileable, so there will always be some fairly nasty visible seams. That said, if you use a relatively complex model (the drawbacks are especially apparent on simple primitives), add a decent blur, and blend the reflections with the original model texture, the results are, as we used to say back when I worked for the government, good enough for government work.

The basic steps, then, are these:

  1. Capture the camera image into a Texture2D instance
  2. Blit that texture into a RenderTexture instance using a blur shader
  3. Apply that blurred RenderTexture to the 3D object’s material via a shader
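
Condensed into plain Unity calls (and ignoring the UV bookkeeping for now), those three steps look roughly like the sketch below. The names here are placeholders of my own; the real implementation, with all the ARCore plumbing, comes later in the Code section.

using UnityEngine;

// Minimal sketch of the three steps above (placeholder names, no ARCore plumbing)
public class EnvironmentMapSketch : MonoBehaviour
{
    public Material blurMaterial;   // material built from the blur shader below
    public Renderer targetRenderer; // renderer using the environment-map shader below

    private Texture2D cameraImage;
    private RenderTexture blurred;

    // Imagine this called each frame with the raw camera bytes
    public void OnCameraImage(byte[] bytes, int width, int height)
    {
        if (cameraImage == null)
        {
            cameraImage = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
            blurred = new RenderTexture(width, height, 0);
        }

        // 1. Capture the camera image into a Texture2D
        cameraImage.LoadRawTextureData(bytes);
        cameraImage.Apply();

        // 2. Blit it into a RenderTexture through the blur material
        Graphics.Blit(cameraImage, blurred, blurMaterial);

        // 3. Hand the blurred texture to the model's material
        targetRenderer.sharedMaterial.SetTexture("_MapTex", blurred);
    }
}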

Shaders

The first shader we’ll need is the one applied to the 3D model: a basic surface shader using a BlinnPhong lighting model for nice highlights, with the reflection supplied via Emission. This is really the core of the effect and the shader that does all the magic. It takes a Map Texture property (_MapTex) and samples it using UVs derived from the ‘radialized’ world normal. That sample is used as the output Emission and becomes the reflection. The shader also does a bit of jiggery-pokery to adjust the map UVs according to some adjusted UV properties. The reason is that the image captured from the device camera may be oriented in a way that doesn’t make sense for our reflections. You can see another example of this in the EdgeDetectionBackground.shader file of the Google ARCore computer vision example project (which is where I, admittedly, yanked a good deal of the implementation). This shader also contains a _ReflectionAmount property which ranges from 0 to 1. Obviously 0 will show no reflections, whereas 1 will show only reflections. As mentioned above, the example in the video uses a _ReflectionAmount of .85.
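
For reference, the spherical mapping performed by the RadialCoords function below, applied to a unit normal $n = (x, y, z)$, works out to:

\[
\mathrm{lon} = \operatorname{atan2}(z, x), \quad \mathrm{lat} = \arccos(y), \qquad
u = \frac{\mathrm{lon}}{2\pi} + \frac{1}{2}, \quad v = 1 - \frac{\mathrm{lat}}{\pi}
\]

Since atan2 returns values in $[-\pi, \pi]$ and arccos values in $[0, \pi]$, both $u$ and $v$ land in $[0, 1]$: $u$ wraps around the model’s vertical axis, and $v$ runs from pole to pole.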

/**
*    Copyright (c) 2018 Devon O. Wolfgang
*
*    Permission is hereby granted, free of charge, to any person obtaining a copy
*    of this software and associated documentation files (the "Software"), to deal
*    in the Software without restriction, including without limitation the rights
*    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
*    copies of the Software, and to permit persons to whom the Software is
*    furnished to do so, subject to the following conditions:
*
*    The above copyright notice and this permission notice shall be included in
*    all copies or substantial portions of the Software.
*
*    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
*    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
*    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
*    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
*    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
*    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
*    THE SOFTWARE.
*/
 
Shader "onebyonedesign/ARCoreEnvironmentMap" {
 
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Shininess ("Shininess", Range (0.01, 1)) = 0.078125
        _MapTex ("Map Texture", 2D) = "white" {}
        _ReflectionAmount ("Relfection Amount", Range(0,1)) = .50
        _ReflectColor ("Reflection Color", Color) = (1,1,1,0.5)
        _UVTopLeftRight("UV Coords Top Left and Right", Vector) = (0,0,1,0)
        _UVBottomLeftRight("UV Coords Bottom Left and Right", Vector) = (0,1,1,1)
    }
 
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
 
        CGPROGRAM
        #pragma surface surf BlinnPhong vertex:vert finalcolor:lightEstimation
 
        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0
 
        sampler2D _MainTex;
        sampler2D _MapTex;
 
        struct Input {
            float2 uv_MainTex;
            float2 mapUV;
        };
 
        half _Shininess;
        fixed4 _ReflectColor;
        fixed4 _Color;
        float4 _UVTopLeftRight;
        float4 _UVBottomLeftRight;
        float _ReflectionAmount;
 
        // Global ARCore light estimation
        uniform fixed3 _GlobalColorCorrection;
 
        #define PI 3.141592653589793
        inline float2 RadialCoords(float3 n)
        {
            float lon = atan2(n.z, n.x);
            float lat = acos(n.y);
            float2 sphereCoords = float2(lon, lat) * (1.0 / PI);
            return float2(sphereCoords.x * 0.5 + 0.5, 1 - sphereCoords.y);
        }
 
        void vert (inout appdata_full v, out Input o)
        {
            UNITY_INITIALIZE_OUTPUT(Input,o);
 
            // Get radial coords from world normal
            float3 worldNormal = UnityObjectToWorldNormal(v.normal);
            float2 radial = RadialCoords(normalize(worldNormal));
 
            // Properly align with adjusted UV values
            float2 uvTop = lerp(_UVTopLeftRight.xy, _UVTopLeftRight.zw, radial.x);
            float2 uvBottom = lerp(_UVBottomLeftRight.xy, _UVBottomLeftRight.zw, radial.x);
            o.mapUV = lerp(uvTop, uvBottom, radial.y);
        }
 
        // final color output
        void lightEstimation(Input IN, SurfaceOutput o, inout fixed4 color)
        {
            color.rgb *= _GlobalColorCorrection;
        }
 
        void surf (Input IN, inout SurfaceOutput o)
        {
            _Color.w = _ReflectionAmount;
 
            fixed4 mainColor = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = mainColor.xyz;
            o.Gloss = mainColor.w;
 
            o.Specular = _Shininess;
 
            fixed4 mapColor = tex2D(_MapTex, IN.mapUV);
            mapColor *= mainColor.w;
            o.Emission = mapColor.xyz * _ReflectColor.xyz;
            o.Alpha = mapColor.w * _ReflectColor.w;
        }
        ENDCG
    }
}

The only other shader we need is one that blurs the image. The implementation I went with is kind of interesting. Ordinarily, to create a blur, a shader samples neighboring pixels then averages the output by dividing by the number of samples taken. Well, this one does exactly that, but in a nice, neat, succinct fashion, including a nifty little function that (roughly) calculates a Gaussian distribution. I found this algorithm in a Unity forum post, and it is, in turn, based on a Shadertoy post. So I’ll leave it to the lawyers to decide whether it’s usable or not – but I’ve added my own MIT license anyway. Notice this shader has a _BlurAmount property to increase/decrease the blur effect. As mentioned, the example video uses a blur amount of 7.
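
For the curious, normpdf is just the Gaussian probability density function

\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)}, \qquad \frac{1}{\sqrt{2\pi}} \approx 0.39894
\]

which is where the 0.39894 constant in the code comes from. With the hard-coded $\sigma = 3$, the sample weights fall off gently from about 0.133 at the center to 0.126, 0.107, 0.081, and so on as the offset grows, which is what gives the blur its soft Gaussian shape.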

/**
*    Copyright (c) 2018 Devon O. Wolfgang
*
*    Permission is hereby granted, free of charge, to any person obtaining a copy
*    of this software and associated documentation files (the "Software"), to deal
*    in the Software without restriction, including without limitation the rights
*    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
*    copies of the Software, and to permit persons to whom the Software is
*    furnished to do so, subject to the following conditions:
*
*    The above copyright notice and this permission notice shall be included in
*    all copies or substantial portions of the Software.
*
*    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
*    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
*    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
*    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
*    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
*    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
*    THE SOFTWARE.
*/
 
Shader "onebyonedesign/BlurShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _BlurAmount("Blur Amount", Range(0, 50)) = 5
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
 
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
 
            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };
 
            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };
 
 
            // blur algo below taken from https://answers.unity.com/questions/407214/gaussian-blur-shader.html
 
            // normpdf gives us a Gaussian distribution weight for each blur sample;
            // this is the equivalent of multiplying by hard-coded numbers (0.16, 0.15, 0.12, 0.09, etc.) as the original forum code does
            float normpdf(float x, float sigma)
            {
                return 0.39894*exp(-0.5*x*x / (sigma*sigma)) / sigma;
            }
 
            fixed4 blur(sampler2D tex, float2 uv,float blurAmount)
            {
                // get our base color...
                fixed4 col = tex2D(tex, uv);
 
                // total width/height of our blur "grid":
                const int mSize = 11;
 
                // this gives the number of times we'll iterate our blur on each side
                // (up,down,left,right) of our uv coordinate;
                // NOTE that this needs to be a const or you'll get errors about unrolling for loops
                const int iter = (mSize - 1) / 2;
 
                // run the loops (the original forum code writes each sample out line by line;
                // looping makes the number of blur iterations easy to size up and down)
                for (int i = -iter; i <= iter; ++i)
                {
                    for (int j = -iter; j <= iter; ++j)
                    {
                        col += tex2D(tex, float2(uv.x + i * blurAmount, uv.y + j * blurAmount)) * normpdf(float(i), 3);
                    }
                }
 
                //return blurred color
                return col/mSize;
            }
 
            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _BlurAmount;
            float4 _MainTex_TexelSize;
            
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }
            
            fixed4 frag (v2f i) : SV_Target
            {
                half avgSize = (_MainTex_TexelSize.x+_MainTex_TexelSize.y)*.5;
                return blur(_MainTex, i.uv, _BlurAmount*avgSize);
            }
            ENDCG
        }
    }
}

Code

Finally we get to some actual setup and code. First, in order to get the device camera image, I used the Google TextureReader component included in the ARCore computer vision example and attached it to a ‘controller’ game object in my Unity3D scene. Because these reflections will be blurred and don’t need to be very detailed, it’s not necessary to use a large image width and height in the Inspector panel settings. I went with 800×480 myself, but you can play around with that. I also used ‘Cover Full Viewport’ for the Image Sample Mode property and ‘Image Format Color’ for the Image Format property.
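
If you’d rather configure the component from code than in the Inspector, the setup looks roughly like the snippet below. Note this is a sketch: the field and enum names are as I recall them from the computer vision example’s TextureReader.cs, and the script’s namespace has shifted between SDK versions, so verify against your copy.

// Sketch only: field/enum names recalled from the ARCore computer vision
// example's TextureReader.cs; verify against your SDK version
var reader = gameObject.AddComponent<TextureReader>();
reader.ImageWidth = 800;   // low-res is fine; the map gets blurred anyway
reader.ImageHeight = 480;
reader.ImageSampleMode = TextureReader.SampleMode.CoverFullViewport;
reader.ImageFormat = TextureReaderApi.ImageFormatType.ImageFormatColor;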

In our controller code, we’ll need to cache an instance of that TextureReader and add a listener for the OnImageAvailableCallback event. Something like this:

/// <summary>
/// On Awake
/// </summary>
private void Awake()
{
    this.BackgroundTextureReader = GetComponent<TextureReader>();
}
 
/// <summary>
/// On Enable
/// </summary>
private void OnEnable()
{
    this.BackgroundTextureReader.OnImageAvailableCallback += OnImageAvailable;
}
 
/// <summary>
/// On Disable
/// </summary>
private void OnDisable()
{
    this.BackgroundTextureReader.OnImageAvailableCallback -= OnImageAvailable;
}

Now, the OnImageAvailable method is really the meat and potatoes of this effect. This is where we draw the captured image into a cached texture, blur it using a material built from the BlurShader above (cached here in the BlurMat field), then apply the result to our AR model’s material (which uses the ARCoreEnvironmentMap shader above). I wanted to keep this method neat, so I wrote it in a (kinda, sorta) functional way. I’ll include the extension functions I used down below. In any case, the OnImageAvailable method looks like this:

/// <summary>
/// On Image Available
/// </summary>
private void OnImageAvailable(TextureReaderApi.ImageFormatType format, int width, int height, IntPtr pixelBuffer, int bufferSize)
{
    // If we have not yet created our AR Model, exit early
    if(this.ARGameObject==null)
        return;
 
    // Lazily create items cached for performance
    if(this.ImageTexture == null || this.ImageBytes == null || this.BlurredTexture == null)
    {
        this.ImageTexture = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
        this.BlurredTexture = new RenderTexture(width, height, 0);
        this.ImageBytes = new byte[bufferSize];
    }
 
    // Move image bytes into our ImageBytes byte array
    System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, this.ImageBytes, 0, bufferSize);
 
    this.ImageTexture
        .ApplyBytes(this.ImageBytes)
        .BlitInto(this.BlurredTexture, this.BlurMat)
        .Map(ARCoreHelper.GetUVsFromTexture)
        .With(c => {
            // Update environment map shader with correct UV's and blurred map texture
            Material m = ARGameObject.GetComponentInChildren<MeshRenderer>().sharedMaterial;
            m.SetVector("_UVTopLeftRight", new Vector4(c.TopLeft.x, c.TopLeft.y, c.TopRight.x, c.TopRight.y));
            m.SetVector("_UVBottomLeftRight", new Vector4(c.BottomLeft.x, c.BottomLeft.y, c.BottomRight.x, c.BottomRight.y));
            m.SetTexture("_MapTex", BlurredTexture);
        });
}

And, aside from some helper stuff, that is really it.

Helper Stuff

In order to get the updated UV coordinates of the camera image based on device orientation, I basically yanked a number of methods from the ARCore computer vision example and placed them into an ARCoreHelper class (you can see it used above in the line ARCoreHelper.GetUVsFromTexture). The ARCoreHelper class looks like this:

using UnityEngine;
using GoogleARCore;
 
/** Collection of ARCore helper functions. Taken from Google's ComputerVisionController.cs example */
 
public class ARCoreHelper
{
 
    /// <summary>
    /// Gets the UV transformation from passed texture using camera orientation and display aspect ratio
    /// </summary>
    public static DisplayUvCoords GetUVsFromTexture(Texture2D tex)
    {
        int width = tex.width;
        int height = tex.height;
 
        int cameraToDisplayRotation = GetCameraImageToDisplayRotation();
 
        float uBorder;
        float vBorder;
        GetUvBorders(width, height, out uBorder, out vBorder);
 
        DisplayUvCoords coords = new DisplayUvCoords();
 
        switch(cameraToDisplayRotation)
        {
            case 90:
                coords.TopLeft = new Vector2(1 - uBorder, 1 - vBorder);
                coords.TopRight = new Vector2(1 - uBorder, vBorder);
                coords.BottomRight = new Vector2(uBorder, vBorder);
                coords.BottomLeft = new Vector2(uBorder, 1 - vBorder);
                break;
            case 180:
                coords.TopLeft = new Vector2(uBorder, 1 - vBorder);
                coords.TopRight = new Vector2(1 - uBorder, 1 - vBorder);
                coords.BottomRight = new Vector2(1 - uBorder, vBorder);
                coords.BottomLeft = new Vector2(uBorder, vBorder);
                break;
            case 270:
                coords.TopLeft = new Vector2(uBorder, vBorder);
                coords.TopRight = new Vector2(uBorder, 1 - vBorder);
                coords.BottomRight = new Vector2(1 - uBorder, 1 - vBorder);
                coords.BottomLeft = new Vector2(1 - uBorder, vBorder);
                break;
            default:
            case 0:
                coords.TopLeft = new Vector2(1 - uBorder, vBorder);
                coords.TopRight = new Vector2(uBorder, vBorder);
                coords.BottomRight = new Vector2(uBorder, 1 - vBorder);
                coords.BottomLeft = new Vector2(1 - uBorder, 1 - vBorder);
                break;
        }
 
        return coords;
    }
 
    /// <summary>
    /// Gets the percentage of space needed to be cropped on the device camera image to match the display
    /// aspect ratio.
    /// </summary>
    public static void GetUvBorders(int width, int height, out float uBorder, out float vBorder)
    {
        float screenAspectRatio;
        var cameraToDisplayRotation = GetCameraImageToDisplayRotation();
        if(cameraToDisplayRotation == 90 || cameraToDisplayRotation == 270)
        {
            screenAspectRatio = (float)Screen.height / Screen.width;
        }
        else
        {
            screenAspectRatio = (float)Screen.width / Screen.height;
        }
 
        var imageAspectRatio = (float)width / height;
        var croppedWidth = 0.0f;
        var croppedHeight = 0.0f;
 
        if(screenAspectRatio < imageAspectRatio)
        {
            croppedWidth = height * screenAspectRatio;
            croppedHeight = height;
        }
        else
        {
            croppedWidth = width;
            croppedHeight = width / screenAspectRatio;
        }
 
        uBorder = (width - croppedWidth) / width / 2.0f;
        vBorder = (height - croppedHeight) / height / 2.0f;
    }
 
    /// <summary>
    /// Gets the rotation that needs to be applied to the device camera image in order for it to match
    /// the current orientation of the display.
    /// </summary>
    public static int GetCameraImageToDisplayRotation()
    {
#if !UNITY_EDITOR
            AndroidJavaClass cameraClass = new AndroidJavaClass("android.hardware.Camera");
            AndroidJavaClass cameraInfoClass = new AndroidJavaClass("android.hardware.Camera$CameraInfo");
            AndroidJavaObject cameraInfo = new AndroidJavaObject("android.hardware.Camera$CameraInfo");
            cameraClass.CallStatic("getCameraInfo", cameraInfoClass.GetStatic<int>("CAMERA_FACING_BACK"),
                cameraInfo);
            int cameraRotationToNaturalDisplayOrientation = cameraInfo.Get<int>("orientation");
 
            AndroidJavaClass contextClass = new AndroidJavaClass("android.content.Context");
            AndroidJavaClass unityPlayerClass = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
            AndroidJavaObject unityActivity = unityPlayerClass.GetStatic<AndroidJavaObject>("currentActivity");
            AndroidJavaObject windowManager =
                unityActivity.Call<AndroidJavaObject>("getSystemService",
                contextClass.GetStatic<string>("WINDOW_SERVICE"));
 
            AndroidJavaClass surfaceClass = new AndroidJavaClass("android.view.Surface");
            int displayRotationFromNaturalEnum = windowManager
                .Call<AndroidJavaObject>("getDefaultDisplay").Call<int>("getRotation");
 
            int displayRotationFromNatural = 0;
            if (displayRotationFromNaturalEnum == surfaceClass.GetStatic<int>("ROTATION_90"))
            {
                displayRotationFromNatural = 90;
            }
            else if (displayRotationFromNaturalEnum == surfaceClass.GetStatic<int>("ROTATION_180"))
            {
                displayRotationFromNatural = 180;
            }
            else if (displayRotationFromNaturalEnum == surfaceClass.GetStatic<int>("ROTATION_270"))
            {
                displayRotationFromNatural = 270;
            }
 
            return (cameraRotationToNaturalDisplayOrientation + displayRotationFromNatural) % 360;
#else  // !UNITY_EDITOR
        // Using Instant Preview in the Unity Editor, the display orientation is always portrait.
        return 0;
#endif  // !UNITY_EDITOR
    }
}

And finally here are some generic extension methods I’ve been slowly building up for my own use that may come in handy:

/**
*    Copyright (c) 2018 Devon O. Wolfgang
*
*    Permission is hereby granted, free of charge, to any person obtaining a copy
*    of this software and associated documentation files (the "Software"), to deal
*    in the Software without restriction, including without limitation the rights
*    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
*    copies of the Software, and to permit persons to whom the Software is
*    furnished to do so, subject to the following conditions:
*
*    The above copyright notice and this permission notice shall be included in
*    all copies or substantial portions of the Software.
*
*    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
*    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
*    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
*    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
*    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
*    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
*    THE SOFTWARE.
*/
 
using System;
using UnityEngine;
 
namespace Extensions.Functional
{
    /** Collection of generic method extensions for functional programming */
 
    public static class FunctionalExtensions
    {
        /// <summary>
        /// Perform an action on an object
        /// </summary>
        public static T With<T>(this T obj, Action<T> action)
        {
            action(obj);
            return obj;
        }
 
        /// <summary>
        /// Maps one object to another via the passed map function
        /// </summary>
        public static U Map<T, U>(this T obj, Func<T, U> map)
        {
            return map(obj);
        }
 
        /// <summary>
        /// Performs the passed action when passed condition is met
        /// </summary>
        public static T When<T>(this T obj, Func<bool> condition, Func<T,T> action)
        {
            return condition() ? action(obj) : obj;
        }
 
        /// <summary>
        /// Returns the number of objects in the passed array that meet the passed condition
        /// </summary>
        public static int Count<T>(this T[] arr, Func<T, bool> condition)
        {
            int cnt = 0;
            for(int i = 0; i<arr.Length; i++)
            {
                if(condition(arr[i]))
                    cnt++;
            }
            return cnt;
        }
 
        // Texture2D Extensions
 
        /// <summary>
        /// Load raw byte array and apply texture
        /// </summary>
        public static Texture2D ApplyBytes(this Texture2D t, byte[] bytes)
        {
            t.LoadRawTextureData(bytes);
            t.Apply();
            return t;
        }
 
        /// <summary>
        /// Blit Into passed `renderInto` RenderTexture with passed Material
        /// </summary>
        public static Texture2D BlitInto(this Texture2D t, RenderTexture renderInto, Material mat)
        {
            Graphics.Blit(t, renderInto, mat);
            return t;
        }
    }
}
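
As a quick usage note, here’s a hypothetical example (names of my own invention) of the chaining style these extensions enable, in the same spirit as the OnImageAvailable method above:

using UnityEngine;
using Extensions.Functional;

public class FunctionalExample : MonoBehaviour
{
    private void Start()
    {
        // Map transforms a value; With performs a side effect and passes the value along
        int wordLength = "environment"
            .Map(s => s.Length)       // string -> int (11)
            .With(n => Debug.Log(n)); // logs 11, returns 11

        // When applies a transform only if the condition holds
        int clamped = wordLength.When(() => wordLength > 10, n => 10); // 10
    }
}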

And that is that. I hope this helps some folks out. Let me know if you’d like.

 

Tags: arcore, augmented reality, c#, Unity3D
3 Comments:
  1. How about the Vuforia + ARCore plugin? Will it work?

    kelvin · October 01, 2018
  2. Hey Kelvin, as long as there’s access to the raw camera texture, this method should work fine (and I believe I’ve had to use the camera image with either Vuforia or Wikitude – can’t remember which, but I’m sure there’s a way to access it in both). If you’re targeting iOS and using ARKit though, this functionality is now built in (https://medium.com/@ivannesterenko/realistic-reflections-and-environment-textures-in-arkit-2-0-d8d0f1332eed).

    Devon O. · October 06, 2018
