For the tunnel, I used an Unreal Engine plugin called Procedural Vortex Tunnel.
Procedural Vortex Tunnel is a powerful Unreal Engine plugin designed specifically for generating dynamic, procedural tunnels. It lets developers quickly create complex tunnel structures and customize them extensively as needed.
Install the plugin: Open Unreal Engine, go to the “Edit” menu, and select “Plugins”. Search for “Procedural Vortex Tunnel” in the plugin window, click “Install”, and restart Unreal Engine to load the plugin.
Create a tunnel: Right-click in the Content Browser and select “Add New” -> “Blueprint Class”. Choose “Actor” as the parent class and name the blueprint, for example “BP_VortexTunnel”. Open the new blueprint and add a “Procedural Vortex Tunnel” component in the Components panel.
Configure tunnel parameters: Select the “Procedural Vortex Tunnel” component and adjust the tunnel’s length, radius, curvature, number of segments, and other parameters in the Details panel. Assign materials and textures so the tunnel matches your artistic direction (see the scripting sketch after these steps).
Preview and adjust in real time: Drag the blueprint class into the scene to preview the tunnel, then tweak parameters and watch the changes apply immediately.
Add dynamic effects: Place lights, particle systems, and other dynamic elements inside the tunnel, and configure how they interact with it, such as light-and-shadow changes and particle movement.
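To make the placement step reproducible, here is a minimal editor-scripting sketch using Unreal’s Python API. The blueprint path, the component class name, and the property names are my assumptions for illustration; the plugin’s actual names may differ.

```python
# Sketch: spawn the tunnel blueprint and push a few parameters from script.
# The asset path, component name, and property names below are assumptions.
import unreal

BP_PATH = "/Game/Blueprints/BP_VortexTunnel"  # hypothetical asset path

# Load the blueprint asset and spawn one instance into the open level.
# (EditorLevelLibrary still works in UE5, though newer builds prefer
# the EditorActorSubsystem equivalents.)
bp = unreal.EditorAssetLibrary.load_asset(BP_PATH)
actor = unreal.EditorLevelLibrary.spawn_actor_from_object(
    bp, unreal.Vector(0.0, 0.0, 200.0), unreal.Rotator(0.0, 0.0, 0.0))

# Find the tunnel component and set assumed parameters on it.
for comp in actor.get_components_by_class(unreal.ActorComponent):
    if "VortexTunnel" in comp.get_class().get_name():
        comp.set_editor_property("tunnel_length", 5000.0)  # assumed property
        comp.set_editor_property("radius", 300.0)          # assumed property
        comp.set_editor_property("num_segments", 64)       # assumed property
```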
For this project, I first created a scene and used MetaHuman to design and present the character. Then I used the IK retargeting workflow I learned in the course to import animation data onto this character, so it could be shown working at a computer. To make the animation smoother and more natural, I also created animation layers to adjust and optimize the character’s movements, in particular to fix the mesh penetration (clipping) that appeared during the animation.
While using IK retargeting, I ran into some technical difficulties. For example, the MetaHuman skeleton has one more joint per finger than the skeleton in the imported animation data, which caused severe clipping in the character’s hand model once the animation was applied. To solve this, I removed the extra bones from the MetaHuman skeleton chain and manually adjusted the character’s hand pose. This made the hand movements more natural and in line with expectations, eliminated the clipping, and improved the realism of the overall animation.
In addition, manually adjusting the character’s hand pose and skeleton chain gave me a deeper understanding of IK retargeting and taught me important techniques for matching bones to animation data. These experiences made me more confident in handling existing projects and laid a solid foundation for my future animation work. In the end, through these adjustments and optimizations, I achieved the animation of the character working at the computer, making the whole scene more vivid and believable.
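The bone-mismatch diagnosis boils down to diffing two bone-name lists. Here is a tiny, engine-agnostic sketch of that check; the bone names are illustrative rather than copied from the actual skeletons.

```python
# Diff two finger-bone chains to find joints with no animation source.
# Names are illustrative; MetaHumans add a metacarpal-style joint that
# many imported skeletons lack.
metahuman_index_chain = [
    "index_metacarpal_l", "index_01_l", "index_02_l", "index_03_l",
]
imported_index_chain = [
    "index_01_l", "index_02_l", "index_03_l",
]

extra = [b for b in metahuman_index_chain if b not in imported_index_chain]
print("Bones with no animation source:", extra)
# -> ['index_metacarpal_l']: a candidate to remove from the retarget chain
#    (or to pose manually) so the fingers stop clipping.
```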
For the scene where the protagonist dozes off, I used a first-person perspective to strengthen the audience’s immersion and identification with the character. Specifically, I first created a camera animation in Unreal Engine, using keyframes to precisely control the camera’s position and movement to simulate the shift in perspective as the character nods off. This includes the gaze gradually drooping, blurring, and a slight shake of the view, all of which add realism to the dozing effect.
To achieve this, I keyframed the camera’s position and angle at different points in time to simulate the movement of the character’s head while dozing. In addition, to express the eyelids closing and opening, I animated the character’s eyelids in Unreal Engine; through careful keyframe adjustment, I made sure the speed and amplitude of the blinks matched natural physiological reactions, enhancing the realism of the animation.
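The easing behind those keyframes can be sketched outside the engine. This plain-Python snippet computes an ease-in-out (smoothstep) curve for the camera pitch as the head droops; the frame numbers and droop amplitude are made-up values for illustration.

```python
# Ease-in-out curve for the "head droop" camera pitch, sampled per keyframe.
def smoothstep(t: float) -> float:
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

droop_start, droop_end = 0, 48   # ~2 seconds at 24 fps (assumed timing)
max_droop_deg = -25.0            # pitch down as the head nods (assumed)

for frame in range(droop_start, droop_end + 1, 8):
    t = (frame - droop_start) / (droop_end - droop_start)
    pitch = max_droop_deg * smoothstep(t)
    print(f"frame {frame:3d}: camera pitch {pitch:6.2f} deg")
```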
After completing the camera and eyelid animations, I rendered all the shots and exported the footage. I then post-processed and edited the material in Adobe Premiere Pro, adjusting the pacing of the shots and adding the necessary transitions and sound effects to improve the overall visuals and atmosphere. For example, I added subtle ambient sound, such as keyboard tapping and indoor background noise, to heighten the sense of presence, and applied color correction and sharpening to make the final video clearer and more layered.
Through this production and post-processing, I achieved the dozing-off effect I wanted, making the whole animation more vivid, realistic, and full of narrative tension. The first-person perspective not only increased the audience’s sense of participation but also added fun and interactivity to the story.
This week, the teacher showed us how to use Resolume Arena to connect a projector and project our animation work in the exhibition hall. This way, a project is not limited to a flat, two-dimensional video; it can take on a striking, installation-art quality. I plan to use this method in future exhibitions to give my work more visual impact and artistry.
Operating Resolume Arena involves importing media files (such as animation work), connecting a projector, adjusting the projection settings, and using the controls in the software interface to manage playback, transitions, and effects. Keyboard shortcuts or a MIDI controller allow more precise control and interactive effects, and Resolume Arena also supports real-time image processing and audio-reactive visuals, making it easy to create distinctive results. Used flexibly, these features inject more creativity and artistry into a piece and enhance both the presentation and the audience experience.
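Beyond keyboard shortcuts and MIDI, Resolume can also be driven over OSC. As a hedged sketch using the third-party python-osc package: the port shown is Resolume’s usual default OSC input, and the address pattern follows recent Arena versions, but both should be confirmed against your install’s OSC documentation.

```python
# Trigger a Resolume clip and fade a layer via OSC (pip install python-osc).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # default OSC input port (verify)

# Connect clip 1 on layer 1, then lower that layer's opacity.
client.send_message("/composition/layers/1/clips/1/connect", 1)
client.send_message("/composition/layers/1/video/opacity", 0.4)
```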
In this week’s class, the instructor gave us a brief introduction to Resolume Arena and walked through its basic operations in depth. We each learned how to use the tool to generate the visual-effects footage needed for stage performances, and how to perform the work live from the keyboard. Through these sessions, we not only picked up the basics of Resolume Arena but were also able to apply them to create striking visuals that add charm to a stage performance.
Resolume Arena is a professional real-time video mixing and visual performance software that is widely used in concerts, stage performances, club parties and various art events. As a powerful real-time creation tool, Resolume Arena provides rich functions and flexible operation methods to help artists and visual designers create unique and amazing visual effects.
This week’s course explained in detail how to bring visual works created in TouchDesigner into Unreal Engine for display and rendering. With this skill, students can use TouchDesigner’s powerful node system to build complex visual effects, then rely on Unreal Engine’s excellent rendering capabilities to present those effects more vividly and realistically. The workflow covers data transmission, real-time interaction, and rendering optimization to ensure the final work displays efficiently and runs smoothly. It gives students an efficient, versatile method for presenting future work and more room to explore digital art and interactive media.
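One common transport for a TouchDesigner-to-Unreal link is OSC. As a sketch: assuming a Noise CHOP named noise1 and an OSC Out DAT named oscout1 (hypothetical names), an Execute DAT in TouchDesigner could forward a value every frame for Unreal’s OSC plugin to route into a Blueprint.

```python
# TouchDesigner Execute DAT callback: send one channel's value each frame.
# 'noise1' and 'oscout1' are assumed operator names in this project.
def onFrameEnd(frame):
    value = op('noise1')['chan1'].eval()             # read the CHOP channel
    op('oscout1').sendOSC('/td/intensity', [value])  # forward over OSC
    return
```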
In the second half of this week’s course, we dove into inverse kinematics (IK) retargeting for characters in Unreal Engine 5 (UE5). This technique greatly simplifies character animation and makes motion look more natural and fluid: IK retargeting lets animators adjust a character’s body pose and motion path without keyframing frame by frame, significantly reducing manual work. Specifically, we learned how to set up an IK rig, adjust target positions and rotations, and use UE5’s tools to optimize character animation. We also discussed how these techniques apply to different kinds of characters and scenes, making complex actions more efficient to produce. These skills improve animation efficiency and support the creation of more realistic character movement.
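To build intuition for what the solver does during IK retargeting, here is a minimal two-bone IK solve in 2D using the law of cosines. This is the generic textbook formulation, not UE5’s actual implementation.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles placing the end of a two-bone chain
    of lengths l1, l2 at target (tx, ty), clamping unreachable targets."""
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-6)
    # Elbow bend from the law of cosines on the triangle (l1, l2, d).
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder = direction to target minus the triangle's inner angle.
    cos_inner = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

s, e = two_bone_ik(30.0, 25.0, 40.0, 20.0)
print(f"shoulder {math.degrees(s):.1f} deg, elbow {math.degrees(e):.1f} deg")
```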
In this class, the instructor introduced TouchDesigner and gave us some basic knowledge about it.
TouchDesigner is a visual development platform for real-time interactive content. It provides a powerful node-based interface in which users build complex real-time graphics and interactive applications by connecting data streams and processing nodes. It can handle audio, video, 3D graphics, sensor data, and more to create rich real-time visuals, and is widely used in stage performances, art installations, exhibitions, and interactive events.
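That node graph is fully scriptable from TouchDesigner’s built-in Python, which gives a small taste of how the platform works. This sketch runs only inside TouchDesigner, and the container path and operator names are chosen here for illustration.

```python
# Build a tiny node chain from script: Noise TOP -> Level TOP.
container = op('/project1')                       # assumed container path
noise = container.create(noiseTOP, 'noise_demo')  # generator node
level = container.create(levelTOP, 'level_demo')  # processing node
level.inputConnectors[0].connect(noise)           # wire noise -> level
level.par.opacity = 0.5                           # set a parameter from script
```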
In this week’s course, we presented our initial ideas for the artifact to our instructor, who gave us some sound suggestions that helped us establish a clear positioning and direction for our project.
Above is my initial idea for Project 2, for which I drew a storyboard.
Story Outline
——On the eve of the deadline, the painter has been working non-stop to complete the work. He is very tired, but still insists on painting.
——During the painting process, he becomes so sleepy that he falls asleep without realizing it, his head resting on his drawing board.
——Suddenly, the painter feels himself sinking through a fantastical tunnel into a strange yet familiar scene.
——When he looks around, he finds that this scene is actually the world in the painting he is creating.
The teacher suggested that during the journey through the tunnel, some floating objects could appear around the protagonist, such as his drawing board, scale ruler, computer mouse, and keyboard, sometimes drifting in front of him and sometimes behind him.
In this week’s course, we learned how to use Unreal VCam to create a real-time camera and add it to an Unreal project. This lets us view our scene in real time through a mobile phone, much like using VR.
While using Unreal VCam, I hit a problem when filling in the IP address: my IP address turned out to be exactly the same as that of the classmate next to me, which could prevent my phone from connecting to my computer. The first thing to check is the network environment: on a school or company network, devices may share the same network segment or appear behind the same public IP, which is why the addresses looked identical. In that case, try connecting with a different device, or reconfigure the network so each machine gets a distinct address.
Unreal VCam is a powerful tool that turns a mobile device into a virtual camera for controlling and previewing scenes in Unreal Engine. It is very useful for filmmaking, virtual production, and real-time animation: with the Unreal VCam app, you can adjust the camera angle, focal length, and other settings in the scene in real time, which greatly improves efficiency and creative freedom.
Common problems and solutions:
Connection failure: If the connection fails, check that the IP address and port number are correct and that the firewall is not blocking the connection.
Lag issues: If there is lag, make sure the network connection is stable and try to use 5GHz Wi-Fi or a wired network to connect to the computer.
App crashes: If the app crashes, restart the device and the app, and check that the app version is compatible with your Unreal Engine version.
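For connection failures like the IP clash above, a quick sanity check is whether the editor machine is reachable at all from the phone’s network. Below is a generic Python reachability probe; the host and port are placeholders, not VCam’s documented defaults.

```python
# Generic TCP reachability probe for debugging "cannot connect" issues.
import socket

HOST = "192.168.1.50"  # placeholder: the editor PC's LAN address
PORT = 8888            # placeholder: whatever port your Unreal endpoint uses

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect((HOST, PORT))
        print("Port reachable - the network path looks fine")
    except OSError as err:
        print(f"Cannot reach {HOST}:{PORT}: {err} (check segment/firewall)")
```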
This week’s course was organized like last week’s: the class was again divided into two groups, one staying in the classroom for the lesson while the other went to the workshop for practice. In this week’s session, we mainly discussed the basic requirements and content of this semester’s Artifact project and helped each student develop a preliminary plan. We also held individual discussions to help students move their Artifact projects forward.
We first clarified the goals and expected results of the Artifact project. Then, based on each student’s interests and professional fields, we helped them identify suitable Artifact topics and provided relevant resources and guidance. We encouraged students to choose a project topic that can show their unique talents and creativity.
We had a detailed discussion with each student to understand their ideas, schedules, and resource needs. We provided some suggestions and advice to help students develop a reasonable schedule and specific task assignments. We also emphasized the importance of good communication and collaboration, and encouraged students to actively communicate and cooperate with mentors and other students, support each other and learn from each other’s experience.
We also summarized and expanded on each student’s project, offering creative ideas and suggestions to help them further refine and extend their work. At the same time, we encouraged them to keep an open mind and adjust their project plans at any time to adapt to possible changes and challenges.
This week’s course helped students further clarify the requirements and content of the Artifact project and provided them with targeted guidance and support. Through the individual discussions and project expansion, students received more inspiration and help, laying a good foundation for planning and completing their own projects. We believe that in the weeks ahead, students will be able to give full play to their talents and creativity and successfully complete their own Artifact projects.
In the first week of our course, the students in our class were divided into two groups and assigned different learning content. The first group stayed in the classroom for regular lessons, while the second group, mine, was taken to a dedicated motion-capture studio to experience and learn the basic operation of the Vicon system.
Vicon is a high-precision motion capture system widely used in film and television, game development, sports science and virtual reality. It accurately records complex dynamic movements by installing a series of reflective markers on the captured object and then using multiple high-speed cameras to simultaneously capture the position and movement trajectory of these markers. This data can be used to drive virtual characters and make animations more realistic.
After entering the motion capture studio, we first learned about the basic components of the Vicon system, including cameras, reflective markers, motion capture suits and data processing software. We also learned how to properly set up and calibrate these devices to ensure that the captured data is accurate.
Next, we were fortunate to experience the motion-capture equipment first-hand. Several students put on special motion-capture suits covered with reflective markers, whose positions are carefully designed so the cameras can capture their movements accurately from every angle.
Before we officially started the motion capture, the technicians demonstrated how to set up the Vicon system. They explained in detail the placement of the camera, the calibration process, and how to use the dedicated software to monitor and adjust the capture environment. We learned that good calibration and setup are key steps to ensure the accuracy of the captured data.
When all the preparations were complete, the students wearing the motion-capture suits performed a series of movements. The high-speed cameras in the studio captured every subtle change in motion and streamed the data to the computer in real time. Through the software interface, we could see a virtual 3D model accurately reproducing the students’ movements; this data can then be exported for applications such as animation production and motion analysis.
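Once exported, a capture is just time-stamped marker positions. As a rough sketch of handling such data offline (the CSV layout here, a frame number plus x/y/z per row for a single marker, is a simplified assumption rather than Vicon’s exact export format), one could estimate a marker’s speed per frame:

```python
# Estimate per-frame marker speed from a simplified position export.
import csv
import math

FPS = 120.0  # assumed capture rate

with open("capture_export.csv", newline="") as f:  # hypothetical file
    rows = [list(map(float, r)) for r in csv.reader(f)]

prev = None
for frame, x, y, z in rows:
    if prev is not None:
        step = math.dist((x, y, z), prev)  # distance moved since last frame
        print(f"frame {int(frame)}: speed {step * FPS:.1f} units/s")
    prev = (x, y, z)
```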
The whole process not only gave us a deeper understanding of motion capture technology, but also allowed us to experience the complexity and fun of its actual operation. Through this practice, we not only learned theoretical knowledge, but also mastered some basic operating skills, laying a solid foundation for future learning and application.
All in all, this was a very interesting and inspiring course.
Concept of a World: A world is defined by its community, dimension, and actions. It encompasses beliefs, voice, and the ability to advance or collapse. Worlds have mythic figures, members, and rules that might appear arbitrary to outsiders. A world serves the common welfare of its members and provides magic powers and permissions to live uniquely within it. It creates relevance and meaning through agreed actions and undergoes innovations and upheavals to stay active.
A world is a vessel for stories and structures but is always evolving.
User-centered design: emphasizes understanding the user rather than relying solely on the designer’s intuition, using mixed methodologies such as observational studies, interviews, personas, scenarios, user journeys, prototyping, testing, and data analysis.