AUTHOR=Bei Tang TITLE=Multimodal computational modeling of EEG and artistic painting for exploring the stress-relief mechanism of urban green spaces JOURNAL=Frontiers in Psychology VOLUME=16 YEAR=2025 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1547947 DOI=10.3389/fpsyg.2025.1547947 ISSN=1664-1078 ABSTRACT=Introduction: Multimodal learning has recently opened new possibilities for integrating semantic understanding into creative domains, and models like Contrastive Language-Image Pretraining (CLIP) provide a compelling foundation for bridging text-image relationships in artistic applications. However, while CLIP demonstrates exceptional capabilities in image-text alignment, its application in dynamic, creative tasks such as freehand brushwork painting remains underexplored. Traditional methods for generating artwork with neural networks often rely on static image generation techniques, which struggle to capture the fluidity and dynamism of brushstrokes in real-time creative processes. These approaches frequently lack the interpretive flexibility required to respond to real-time textual prompts with spontaneous, expressive outcomes. Methods: To address this, we propose ArtCLIP, a novel framework that integrates CLIP with an attention fusion mechanism to facilitate dynamic freehand brushwork painting. Our method uses CLIP's ability to interpret textual descriptions and visual cues in tandem with an attention-based fusion model, which enables the system to modulate brushstrokes responsively and adjust painting styles dynamically based on evolving inputs. Results and discussion: We conduct extensive experiments demonstrating that ArtCLIP achieves significant improvements in real-time artistic rendering tasks compared to baseline models. The results show enhanced adaptability to varying artistic styles and better alignment with descriptive prompts, offering a promising avenue for digital art creation.
By enabling semantically driven and stylistically controllable painting generation, our approach contributes to a more interpretable and interactive form of AI-assisted creativity.
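The abstract describes an attention-based fusion step in which a text embedding guides how stroke-level visual features are combined. The paper's actual implementation is not given here; the following is a minimal NumPy sketch of that general idea under stated assumptions: `text_emb` stands in for a CLIP-style text embedding, `stroke_feats` for per-stroke visual features, and `attention_fusion` is a hypothetical helper, not an ArtCLIP API.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(text_emb, stroke_feats):
    """Toy cross-attention: the text embedding acts as the query,
    stroke features as keys/values. Returns a fused feature vector
    plus the attention weights over strokes.

    text_emb:     (d,)   assumed text-side embedding
    stroke_feats: (n, d) assumed per-stroke visual features
    """
    d = text_emb.shape[0]
    # Scaled dot-product scores between the text query and each stroke
    scores = stroke_feats @ text_emb / np.sqrt(d)      # (n,)
    weights = softmax(scores)                          # (n,), sums to 1
    fused = weights @ stroke_feats                     # (d,)
    return fused, weights

# Example with random placeholder features (dimensions are illustrative)
rng = np.random.default_rng(0)
text_emb = rng.normal(size=16)
stroke_feats = rng.normal(size=(5, 16))
fused, weights = attention_fusion(text_emb, stroke_feats)
```

In a real system the attention weights would modulate which strokes (or styles) dominate as the textual prompt evolves; here they simply reweight placeholder feature vectors.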