BE in Automatic Control and Robotics | MSc Student in Computer Science | Software Developer.
I hold a Bachelor of Engineering degree in Automatic Control, Cybernetics, and Robotics and am currently pursuing a master's degree in Computer Science. My interests lie primarily in sound and image processing, machine learning, and physics. I also have a profound interest in music, from both a creative and a technical perspective. I enjoy reading scientific articles on Music Information Retrieval and Generative AI in music and am keen to expand my knowledge in these areas.
(MSc) Computer Science | Warsaw University of Technology (Feb 2024 - present)
(BE) Automatic Control, Cybernetics and Robotics | Gdańsk University of Technology (Oct 2020 - Feb 2024)
Working Student - Software Developer / Test Automation Developer (part-time) | Nokia (Oct 2023 - Present)
Summer Trainee - Software Developer / Test Automation Developer (full-time) | Nokia (Jul 2023 - Sep 2023)
My biggest solo project, and the subject of my bachelor's thesis, was developed in Python. It presents an alternative approach to generating multitrack, full-length MIDI songs from text using the ChatGPT API, genetic algorithms, and probabilistic methods. The LLM defines the song structure, including the time signature, scales, chord progressions, and valence-arousal values, from which the accompaniment, melody, bass, motif, and percussion tracks are created. The hybrid system derives emotional parameters from the predicted point on the valence-arousal plane; these influence the genetic algorithm's fitness function as well as other parameters such as the MIDI velocity range. Tracks and their sections are given titles, and ChatGPT explains the creative process in a chat window. Generated compositions can be loaded and modified with further prompts (the previous context is preserved). Because it works with MIDI and is not limited to the structures that dominate popular music, the system can serve as a source of inspiration for musicians.
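To make the emotional conditioning concrete, here is a minimal sketch of how a predicted valence-arousal point could steer the MIDI velocity range and the weights of the GA fitness function. The ranges, weights, and fitness terms below are illustrative assumptions, not the exact values used in the thesis.

```python
CONSONANT = {0, 3, 4, 5, 7, 8, 9, 12}  # interval classes treated as consonant


def emotion_params(valence: float, arousal: float) -> dict:
    """Map a point on the valence-arousal plane (both in [-1, 1])
    to generation parameters (illustrative values)."""
    lo = int(40 + 20 * arousal)  # higher arousal -> louder notes,
    hi = int(90 + 30 * arousal)  # wider MIDI velocity range (0-127)
    return {
        "velocity_range": (max(0, lo), min(127, hi)),
        "consonance_weight": 0.5 + 0.5 * valence,  # positive valence -> consonant
        "density_weight": 0.5 + 0.5 * arousal,     # high arousal -> busy rhythm
    }


def fitness(pitches: list, durations: list, params: dict) -> float:
    """Toy GA fitness: reward consonant melodic intervals and, for high
    arousal, denser (shorter) notes."""
    intervals = [abs(b - a) % 12 for a, b in zip(pitches, pitches[1:])]
    consonance = sum(i in CONSONANT for i in intervals) / max(len(intervals), 1)
    density = len(durations) / sum(durations)  # notes per beat
    return (params["consonance_weight"] * consonance
            + params["density_weight"] * density)


params = emotion_params(valence=0.8, arousal=0.4)  # a "joyful" point
print(params["velocity_range"])                    # (48, 102)
print(fitness([60, 64, 67, 72], [0.5, 0.5, 1.0, 1.0], params))
```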
To present the functionality of the system, I generated a couple of songs using descriptions from Meta's MusicGen and Google's MusicLM sites. The presented WAV files are synthesized from MIDI, so the instrument sounds are fairly basic; the underlying MIDI can, however, be rendered with higher-quality samples.
Prompt: Smooth jazz, with a saxophone solo, piano chords, and snare full drums
Prompt: 80s electronic track with melodic synthesizers, catchy beat and groovy bass
Prompt: Progressive rock drum and bass solo
Prompt: drum and bass beat with intense percussions
Prompt: A grand orchestral arrangement with thunderous percussion, epic brass fanfares, and soaring strings, creating a cinematic atmosphere fit for a heroic battle.
Prompt: Funky piece with a strong, danceable beat and a prominent bassline. A catchy melody from a keyboard adds a layer of richness and complexity to the song.
Prompt: Epic soundtrack using orchestral instruments. The piece builds tension, creates a sense of urgency. An a cappella chorus sing in unison, it creates a sense of power and strength.
Prompt: Violins and synths that inspire awe at the finiteness of life and the universe.
Prompt: The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls.
Prompt: We can hear a choir, singing a Gregorian chant, and a drum machine, creating a rhythmic beat. The slow, stately sounds of strings provide a calming backdrop for the fast, complex sounds of futuristic electronic music.
Technologies used: Python, PySide6, NumPy, Pandas, openai, matplotlib, MidiUtil, Mingus, PyGame
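For illustration, the request for a song structure could look roughly like the sketch below. The prompt wording and the JSON schema are assumptions made for this example, and it uses the pre-1.0 `openai` interface that was current when the project was built.

```python
# Sketch of asking ChatGPT for a machine-readable song structure.
# The schema keys here are hypothetical; the thesis used its own format.
import json

import openai  # openai<1.0 interface

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "system", "content": "You are a music composition assistant. "
                                  "Reply with JSON only."},
    {"role": "user", "content": "Describe a song for: '80s electronic track "
                                "with melodic synthesizers'. Return JSON with "
                                "keys: time_signature, scale, chord_progression, "
                                "valence, arousal."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
structure = json.loads(response.choices[0].message.content)
print(structure["chord_progression"], structure["valence"], structure["arousal"])

# Appending follow-up prompts to `messages` preserves the context, which is
# how an already generated composition can be loaded and modified.
```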
I led a group that aimed to create a hub of meta-learning models trained on a small number of images, enabling subsequent classification of user-chosen categories. We implemented models such as MAML, Prototypical Networks, Siamese Networks, and the state-of-the-art EASY model. The best models achieved around 80% accuracy on a 5-way, 10-shot task. In the app, the user can upload photos, name the categories, define hyperparameters, and observe the results of the training. On another page, images can be classified by a model chosen from the list of trained ones. I was responsible for planning the whole project, distributing tasks, reviewing code, writing documentation, testing, and implementing the Siamese network (sketched after the technology list below).
Technologies used: Python, PyTorch, TensorFlow, NumPy, Matplotlib, OpenCV, CUDA, Pandas, Keras, PyQt5
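Below is a minimal sketch of the Siamese approach, assuming a small convolutional embedding trained with a contrastive loss; the architecture and hyperparameters in the actual project differed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseNet(nn.Module):
    """One shared embedding network applied to both images of a pair."""

    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)


def contrastive_loss(e1, e2, same, margin: float = 1.0):
    """Pull same-class embeddings together, push different ones apart."""
    d = F.pairwise_distance(e1, e2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()


# At inference, a query image gets the class of its nearest support
# embedding (e.g. 5 classes x 10 shots for the 5-way, 10-shot task).
net = SiameseNet()
x1, x2 = torch.randn(8, 3, 84, 84), torch.randn(8, 3, 84, 84)
same = torch.randint(0, 2, (8,)).float()  # 1 = same class, 0 = different
loss = contrastive_loss(*net(x1, x2), same)
loss.backward()
```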
I led a group project that aimed to turn data into sound. We created an app that takes any RGB image as input, removes the background (if the image is a photo), finds the edges, and creates MIDI from them: a point's height in the image determines a note's pitch, and its horizontal position determines the time. The MIDI track is then turned into sound by a music synthesizer we implemented ourselves. The colors of the image influence the synthesizer's parameters, such as the waveform type, filters, and applied effects. I was responsible for planning the whole project, distributing tasks, reviewing code, writing documentation, and converting the image data into MIDI and synthesizer parameters (a sketch of the mapping follows the technology list below).
Technologies used: Python, PyQt6, NumPy, OpenCV, SciPy, matplotlib, MidiUtil, Mingus
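As a rough illustration of the edge-to-MIDI mapping, the sketch below assumes Canny edge detection, a C-major pitch grid, and one note per image column; the project's actual mapping (and the color-driven synthesizer parameters) was more elaborate.

```python
import cv2
from midiutil import MIDIFile

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)  # binary edge map
h, w = edges.shape

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # scale degrees in semitones
midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=120)

for x in range(w):
    ys = edges[:, x].nonzero()[0]  # rows of edge pixels in this column
    if len(ys) == 0:
        continue
    y = int(ys.mean())             # average edge height in the column
    # Top of the image -> high pitch, bottom -> low; snap to C major
    # across three octaves starting at C3 (MIDI 48).
    degree = int((1 - y / h) * 3 * len(C_MAJOR))
    pitch = 48 + 12 * (degree // 7) + C_MAJOR[degree % 7]
    time = x / w * 32              # image width spans 32 beats
    midi.addNote(track=0, channel=0, pitch=pitch, time=time,
                 duration=0.25, volume=100)

with open("edges.mid", "wb") as f:
    midi.writeFile(f)
```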