Model loading and texturing
This commit is contained in:
BIN  thirdparty/assimp/port/PyAssimp/3d_viewer_screenshot.png (vendored, new file)
Binary file not shown (size: 50 KiB).
91  thirdparty/assimp/port/PyAssimp/README.md (vendored, new file)
@@ -0,0 +1,91 @@
PyAssimp Readme
===============

A simple Python wrapper for Assimp using `ctypes` to access the library.
Requires Python >= 2.6.

Python 3 support is mostly here, but not well tested.

Note that pyassimp is not complete. Many ASSIMP features are missing.

USAGE
-----

### Complete example: 3D viewer

`pyassimp` comes with a simple 3D viewer that shows how to load and display a 3D
model using a shader-based OpenGL pipeline.

![Screenshot](3d_viewer_screenshot.png)

To use it, from within `/port/PyAssimp`:

```console
$ cd scripts
$ python ./3D-viewer <path to your model>
```

You can use this code as a starting point in your own applications.

### Writing your own code

To get started with `pyassimp`, examine the simpler `sample.py` script in `scripts/`,
which illustrates the basic usage. All Assimp data structures are wrapped using
`ctypes`. All the data+length fields in Assimp's data structures (such as
`aiMesh::mNumVertices`, `aiMesh::mVertices`) are replaced by simple Python
lists, so you can call `len()` on them to get their respective size and access
members using `[]`.

For example, to load a file named `hello.3ds` and print the first
vertex of the first mesh, you would do (proper error handling
substituted by assertions...):

```python
from pyassimp import *
scene = load('hello.3ds')

assert len(scene.meshes)
mesh = scene.meshes[0]

assert len(mesh.vertices)
print(mesh.vertices[0])

# don't forget this one, or you will leak!
release(scene)
```

Another example, to list the 'top nodes' in a scene:

```python
from pyassimp import *
scene = load('hello.3ds')

for c in scene.rootnode.children:
    print(str(c))

release(scene)
```

INSTALL
-------

Install `pyassimp` by running:

```console
$ python setup.py install
```

PyAssimp requires an Assimp dynamic library (a `DLL` on Windows,
a `.so` on Linux, a `.dylib` on macOS) in order to work. The default search directories are:

- the current directory
- on Linux additionally: `/usr/lib`, `/usr/local/lib`, `/usr/lib/x86_64-linux-gnu`

To build that library, refer to the Assimp master `INSTALL`
instructions. To look in more places, edit `./pyassimp/helper.py`.
There's an `additional_dirs` list waiting for your entries.
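The data+length flattening described above can be sketched with plain `ctypes`, independent of libassimp. `FakeMesh` below is an illustrative stand-in for an Assimp-style struct, not a real pyassimp type:

```python
import ctypes

# Illustrative stand-in for an Assimp-style struct: a count field
# paired with a raw pointer, like mNumVertices / mVertices.
class FakeMesh(ctypes.Structure):
    _fields_ = [("mNumVertices", ctypes.c_uint),
                ("mVertices", ctypes.POINTER(ctypes.c_float))]

data = (ctypes.c_float * 3)(1.0, 2.0, 3.0)
mesh = FakeMesh(3, ctypes.cast(data, ctypes.POINTER(ctypes.c_float)))

# pyassimp-style flattening: pointer + length -> plain Python list,
# so len() and [] work as described above.
vertices = [mesh.mVertices[i] for i in range(mesh.mNumVertices)]
print(len(vertices), vertices[0])
```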
96  thirdparty/assimp/port/PyAssimp/README.rst (vendored, new file)
@@ -0,0 +1,96 @@
PyAssimp: Python bindings for libassimp
=======================================

A simple Python wrapper for Assimp using ``ctypes`` to access the
library. Requires Python >= 2.6.

Python 3 support is mostly here, but not well tested.

Note that pyassimp is not complete. Many ASSIMP features are missing.

USAGE
-----

Complete example: 3D viewer
~~~~~~~~~~~~~~~~~~~~~~~~~~~

``pyassimp`` comes with a simple 3D viewer that shows how to load and
display a 3D model using a shader-based OpenGL pipeline.

.. figure:: 3d_viewer_screenshot.png
   :alt: Screenshot

   Screenshot

To use it, from within ``/port/PyAssimp``:

::

    $ cd scripts
    $ python ./3D-viewer <path to your model>

You can use this code as a starting point in your own applications.

Writing your own code
~~~~~~~~~~~~~~~~~~~~~

To get started with ``pyassimp``, examine the simpler ``sample.py``
script in ``scripts/``, which illustrates the basic usage. All Assimp
data structures are wrapped using ``ctypes``. All the data+length fields
in Assimp's data structures (such as ``aiMesh::mNumVertices``,
``aiMesh::mVertices``) are replaced by simple Python lists, so you can
call ``len()`` on them to get their respective size and access members
using ``[]``.

For example, to load a file named ``hello.3ds`` and print the first
vertex of the first mesh, you would do (proper error handling
substituted by assertions...):

.. code:: python

    from pyassimp import *
    scene = load('hello.3ds')

    assert len(scene.meshes)
    mesh = scene.meshes[0]

    assert len(mesh.vertices)
    print(mesh.vertices[0])

    # don't forget this one, or you will leak!
    release(scene)

Another example, to list the 'top nodes' in a scene:

.. code:: python

    from pyassimp import *
    scene = load('hello.3ds')

    for c in scene.rootnode.children:
        print(str(c))

    release(scene)

INSTALL
-------

Install ``pyassimp`` by running:

::

    $ python setup.py install

PyAssimp requires an Assimp dynamic library (a ``DLL`` on Windows, a ``.so``
on Linux, a ``.dylib`` on macOS) in order to work. The default search
directories are:

- the current directory
- on Linux additionally: ``/usr/lib``, ``/usr/local/lib``,
  ``/usr/lib/x86_64-linux-gnu``

To build that library, refer to the Assimp master ``INSTALL``
instructions. To look in more places, edit ``./pyassimp/helper.py``.
There's an ``additional_dirs`` list waiting for your entries.
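Since ``helper.py`` has to locate the Assimp shared library at runtime, it can be useful to check what the platform's standard loader mechanism sees on your machine. A minimal probe using only the standard library (this is not how pyassimp itself searches; it walks its own directory list):

```python
import ctypes.util

# Ask the platform's standard lookup mechanism whether an 'assimp'
# shared library is visible; returns a name/path string, or None
# when no library is found.
candidate = ctypes.util.find_library('assimp')
print(candidate)
```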
96  thirdparty/assimp/port/PyAssimp/gen/materialgen.py (vendored, new file)
@@ -0,0 +1,96 @@
#!/usr/bin/env python
# -*- Coding: UTF-8 -*-

# ---------------------------------------------------------------------------
# Open Asset Import Library (ASSIMP)
# ---------------------------------------------------------------------------
#
# Copyright (c) 2006-2010, ASSIMP Development Team
#
# All rights reserved.
#
# Redistribution and use of this software in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# * Redistributions of source code must retain the above
#   copyright notice, this list of conditions and the
#   following disclaimer.
#
# * Redistributions in binary form must reproduce the above
#   copyright notice, this list of conditions and the
#   following disclaimer in the documentation and/or other
#   materials provided with the distribution.
#
# * Neither the name of the ASSIMP team, nor the names of its
#   contributors may be used to endorse or promote products
#   derived from this software without specific prior
#   written permission of the ASSIMP Development Team.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ---------------------------------------------------------------------------

"""Update PyAssimp's texture type constants to keep up with the C/C++ headers.

This script is meant to be executed in the source tree, directly from
port/PyAssimp/gen
"""

import os
import re

# Match the aiTextureType enum block
REenumTextureType = re.compile(r''
    r'enum\saiTextureType'  # enum aiTextureType
    r'[^{]*?\{'             # {
    r'(?P<code>.*?)'        # code
    r'\};'                  # };
    , re.IGNORECASE | re.DOTALL | re.MULTILINE)

# Replace comments
RErpcom = re.compile(r''
    r'\s*(/\*+\s|\*+/|\B\*\s?|///?!?)'  # /**
    r'(?P<line>.*?)'                    # * line
    , re.IGNORECASE | re.DOTALL)

# Remove trailing commas
RErmtrailcom = re.compile(r',$', re.IGNORECASE | re.DOTALL)

# Remove #ifndef SWIG ... #endif blocks
RErmifdef = re.compile(r''
    r'#ifndef SWIG'                # #ifndef SWIG
    r'(?P<code>.*)'                # code
    r'#endif(\s*//\s*!?\s*SWIG)*'  # #endif
    , re.IGNORECASE | re.DOTALL)

path = '../../../include/assimp'

files = os.listdir(path)
enumText = ''
for fileName in files:
    if fileName.endswith('.h'):
        text = open(os.path.join(path, fileName)).read()
        for enum in REenumTextureType.findall(text):
            enumText = enum

text = ''
for line in enumText.split('\n'):
    line = line.strip()
    line = RErmtrailcom.sub('', line)
    text += RErpcom.sub(r'# \g<line>', line) + '\n'
text = RErmifdef.sub('', text)

file = open('material.py', 'w')
file.write(text)
file.close()

print("Generation done. You can now review the file 'material.py' and merge it.")
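The core of `materialgen.py` is the `REenumTextureType` regex. Its behaviour is easy to check against a small inline header snippet (the snippet below is illustrative, not copied from Assimp's headers):

```python
import re

# Same shape as materialgen.py's REenumTextureType, applied to an
# inline header snippet instead of the real include/assimp tree.
REenum = re.compile(r'enum\saiTextureType'
                    r'[^{]*?\{'
                    r'(?P<code>.*?)'
                    r'\};', re.IGNORECASE | re.DOTALL | re.MULTILINE)

header = """
enum aiTextureType {
    aiTextureType_NONE = 0,
    aiTextureType_DIFFUSE = 1,
};
"""

# Extract the enum body, then pull out the enumerator names.
body = REenum.search(header).group('code')
names = [line.split('=')[0].strip()
         for line in body.splitlines() if '=' in line]
print(names)
```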
290  thirdparty/assimp/port/PyAssimp/gen/structsgen.py (vendored, new file)
@@ -0,0 +1,290 @@
#!/usr/bin/env python
# -*- Coding: UTF-8 -*-

# ---------------------------------------------------------------------------
# Open Asset Import Library (ASSIMP)
# ---------------------------------------------------------------------------
#
# Copyright (c) 2006-2010, ASSIMP Development Team
#
# All rights reserved.
#
# Redistribution and use of this software in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# * Redistributions of source code must retain the above
#   copyright notice, this list of conditions and the
#   following disclaimer.
#
# * Redistributions in binary form must reproduce the above
#   copyright notice, this list of conditions and the
#   following disclaimer in the documentation and/or other
#   materials provided with the distribution.
#
# * Neither the name of the ASSIMP team, nor the names of its
#   contributors may be used to endorse or promote products
#   derived from this software without specific prior
#   written permission of the ASSIMP Development Team.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ---------------------------------------------------------------------------

"""Update PyAssimp's data structures to keep up with the
C/C++ headers.

This script is meant to be executed in the source tree, directly from
port/PyAssimp/gen
"""

import os
import re

#==[regexps]=================================================

# Get #defines
REdefine = re.compile(r''
    r'(?P<desc>)'                                     # /** *desc */
    r'#\s*define\s(?P<name>[^(\n]+?)\s(?P<code>.+)$'  # #define name value
    , re.MULTILINE)

# Get structs
REstructs = re.compile(r''
    #r'//\s?[\-]*\s(?P<desc>.*?)\*/\s'                          # /** *desc */
    #r'//\s?[\-]*(?P<desc>.*?)\*/(?:.*?)'                       # garbage
    r'//\s?[\-]*\s(?P<desc>.*?)\*/\W*?'                         # /** *desc */
    r'struct\s(?:ASSIMP_API\s)?(?P<name>[a-z][a-z0-9_]\w+\b)'   # struct name
    r'[^{]*?\{'                                                 # {
    r'(?P<code>.*?)'                                            # code
    r'\}\s*(PACK_STRUCT)?;'                                     # };
    , re.IGNORECASE | re.DOTALL | re.MULTILINE)

# Clean desc
REdesc = re.compile(r''
    r'^\s*?([*]|/\*\*)(?P<line>.*?)'  # * line
    , re.IGNORECASE | re.DOTALL | re.MULTILINE)

# Remove #ifdef __cplusplus ... #endif blocks
RErmifdef = re.compile(r''
    r'#ifdef __cplusplus'                 # #ifdef __cplusplus
    r'(?P<code>.*)'                       # code
    r'#endif(\s*//\s*!?\s*__cplusplus)*'  # #endif
    , re.IGNORECASE | re.DOTALL)

# Replace comments
RErpcom = re.compile(r''
    r'\s*(/\*+\s|\*+/|\B\*\s|///?!?)'  # /**
    r'(?P<line>.*?)'                   # * line
    , re.IGNORECASE | re.DOTALL)

# Restructure
def GetType(type, prefix='c_'):
    t = type
    while t.endswith('*'):
        t = t[:-1]
    if t[:5] == 'const':
        t = t[5:]

    # skip some types
    if t in skiplist:
        return None

    t = t.strip()
    types = {'unsigned int': 'uint', 'unsigned char': 'ubyte'}
    if t in types:
        t = types[t]
    t = prefix + t
    while type.endswith('*'):
        t = "POINTER(" + t + ")"
        type = type[:-1]
    return t

def restructure(match):
    type = match.group("type")
    if match.group("struct") == "":
        type = GetType(type)
    elif match.group("struct") == "C_ENUM ":
        type = "c_uint"
    else:
        type = GetType(type[2:], '')
    if type is None:
        return ''
    if match.group("index"):
        type = type + "*" + match.group("index")

    result = ""
    for name in match.group("name").split(','):
        result += "(\"" + name.strip() + "\", " + type + "),"

    return result

RErestruc = re.compile(r''
    r'(?P<struct>C_STRUCT\s|C_ENUM\s|)'  # [C_STRUCT]
    r'(?P<type>\w+\s?\w+?[*]*)\s'        # type
    #r'(?P<name>\w+)'                    # name
    r'(?P<name>\w+|[a-z0-9_, ]+)'        # name
    r'(:?\[(?P<index>\w+)\])?;'          # []; (optional)
    , re.DOTALL)

#==[template]================================================
template = """
class $NAME$(Structure):
    \"\"\"
    $DESCRIPTION$
    \"\"\"
    $DEFINES$
    _fields_ = [
            $FIELDS$
        ]
"""

templateSR = """
class $NAME$(Structure):
    \"\"\"
    $DESCRIPTION$
    \"\"\"
    $DEFINES$

$NAME$._fields_ = [
        $FIELDS$
    ]
"""

skiplist = ("FileIO", "File", "locateFromAssimpHeap", 'LogStream', 'MeshAnim', 'AnimMesh')

#============================================================
def Structify(fileName):
    file = open(fileName, 'r')
    text = file.read()
    result = []

    # Get defines.
    defs = REdefine.findall(text)
    # Create defines
    defines = "\n"
    for define in defs:
        # Clean desc
        desc = REdesc.sub('', define[0])
        # Replace comments
        desc = RErpcom.sub(r'#\g<line>', desc)
        defines += desc
        if len(define[2].strip()):
            # skip non-integral defines, we can't support them right now
            try:
                int(define[2], 0)
            except ValueError:
                continue
            defines += " "*4 + define[1] + " = " + define[2] + "\n"

    # Get structs
    rs = REstructs.finditer(text)

    fileName = os.path.basename(fileName)
    print(fileName)
    for r in rs:
        name = r.group('name')[2:]
        desc = r.group('desc')

        # Skip some structs
        if name in skiplist:
            continue

        text = r.group('code')

        # Clean desc
        desc = REdesc.sub('', desc)

        desc = "See '" + fileName + "' for details."  # TODO

        # Remove #ifdef __cplusplus
        text = RErmifdef.sub('', text)

        # Whether the struct contains more than just POD
        primitive = text.find('C_STRUCT') == -1

        # Restructure
        text = RErestruc.sub(restructure, text)
        # Replace comments
        text = RErpcom.sub(r'# \g<line>', text)
        text = text.replace("),#", "),\n#")
        text = text.replace("#", "\n#")
        text = "".join([l for l in text.splitlines(True) if not l.strip().endswith("#")])  # remove empty comment lines

        # Whether it's self-referencing, e.g. struct Node { Node* parent; };
        selfreferencing = text.find('POINTER(' + name + ')') != -1

        complex = name == "Scene"

        # Create description
        description = ""
        for line in desc.split('\n'):
            description += " "*4 + line.strip() + "\n"
        description = description.rstrip()

        # Create fields
        fields = ""
        for line in text.split('\n'):
            fields += " "*12 + line.strip() + "\n"
        fields = fields.strip()

        if selfreferencing:
            templ = templateSR
        else:
            templ = template

        # Put it all together
        text = templ.replace('$NAME$', name)
        text = text.replace('$DESCRIPTION$', description)
        text = text.replace('$FIELDS$', fields)

        if ((name.lower() == fileName.split('.')[0][2:].lower()) and (name != 'Material')) or name == "String":
            text = text.replace('$DEFINES$', defines)
        else:
            text = text.replace('$DEFINES$', '')

        result.append((primitive, selfreferencing, complex, text))

    return result

text = "#-*- coding: UTF-8 -*-\n\n"
text += "from ctypes import POINTER, c_int, c_uint, c_size_t, c_char, c_float, Structure, c_char_p, c_double, c_ubyte\n\n"

structs1 = ""
structs2 = ""
structs3 = ""
structs4 = ""

path = '../../../include/assimp'
files = os.listdir(path)
#files = ["aiScene.h", "aiTypes.h"]
for fileName in files:
    if fileName.endswith('.h'):
        for struct in Structify(os.path.join(path, fileName)):
            primitive, sr, complex, struct = struct
            if primitive:
                structs1 += struct
            elif sr:
                structs2 += struct
            elif complex:
                structs4 += struct
            else:
                structs3 += struct

text += structs1 + structs2 + structs3 + structs4

file = open('structs.py', 'w')
file.write(text)
file.close()

print("Generation done. You can now review the file 'structs.py' and merge it.")
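The reason `structsgen.py` carries two templates (`template` vs `templateSR`) can be shown with a minimal `ctypes` struct: a self-referencing type must assign `_fields_` after the `class` statement so that `POINTER(Node)` can refer to the class. `Node` here is a toy stand-in, not a generated Assimp struct:

```python
from ctypes import POINTER, Structure, c_uint

# A self-referencing struct (like aiNode, whose mParent is another
# aiNode) needs a forward declaration: declare the class first...
class Node(Structure):
    pass

# ...then assign _fields_ once the name 'Node' exists.
Node._fields_ = [("mNumChildren", c_uint),
                 ("mParent", POINTER(Node))]

root = Node(2)          # mNumChildren = 2, mParent = NULL
print(root.mNumChildren)
```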
1  thirdparty/assimp/port/PyAssimp/pyassimp/__init__.py (vendored, new file)
@@ -0,0 +1 @@
from .core import *
546  thirdparty/assimp/port/PyAssimp/pyassimp/core.py (vendored, new file)
@@ -0,0 +1,546 @@
"""
PyAssimp

This is the main module of PyAssimp.
"""

import sys
if sys.version_info < (2, 6):
    raise RuntimeError('pyassimp: need python 2.6 or newer')

# xrange was renamed to range in Python 3, and the original range from Python 2 was removed.
# To keep compatibility with both Python 2 and 3, xrange is set to range for version 3.0 and up.
if sys.version_info >= (3, 0):
    xrange = range

try:
    import numpy
except ImportError:
    numpy = None
import logging
import ctypes

logger = logging.getLogger("pyassimp")
# Attach a default null handler to the logger so it doesn't complain
# even if you don't attach another handler to it.
logger.addHandler(logging.NullHandler())

from . import structs
from . import helper
from . import postprocess
from .errors import AssimpError

class AssimpLib(object):
    """
    Assimp singleton
    """
    load, load_mem, export, export_blob, release, dll = helper.search_library()
_assimp_lib = AssimpLib()

def make_tuple(ai_obj, type = None):
    res = None

    # notes:
    # ai_obj._fields_ = [ ("attr", c_type), ... ]
    # getattr(ai_obj, e[0]).__class__ == float

    if isinstance(ai_obj, structs.Matrix4x4):
        if numpy:
            res = numpy.array([getattr(ai_obj, e[0]) for e in ai_obj._fields_]).reshape((4, 4))
        else:
            res = [getattr(ai_obj, e[0]) for e in ai_obj._fields_]
            res = [res[i:i + 4] for i in xrange(0, 16, 4)]
    elif isinstance(ai_obj, structs.Matrix3x3):
        if numpy:
            res = numpy.array([getattr(ai_obj, e[0]) for e in ai_obj._fields_]).reshape((3, 3))
        else:
            res = [getattr(ai_obj, e[0]) for e in ai_obj._fields_]
            res = [res[i:i + 3] for i in xrange(0, 9, 3)]
    else:
        if numpy:
            res = numpy.array([getattr(ai_obj, e[0]) for e in ai_obj._fields_])
        else:
            res = [getattr(ai_obj, e[0]) for e in ai_obj._fields_]

    return res
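The non-numpy branch of `make_tuple` rebuilds a 4x4 matrix by slicing the 16 flat field values into rows. The slicing idiom in isolation:

```python
# Slice a flat 16-element sequence into four rows of four,
# as the non-numpy fallback does for a Matrix4x4.
flat = list(range(16))
rows = [flat[i:i + 4] for i in range(0, 16, 4)]
print(rows)
```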
# Returns a unicode object for Python 2, and a str object for Python 3.
def _convert_assimp_string(assimp_string):
    if sys.version_info >= (3, 0):
        return str(assimp_string.data, errors='ignore')
    else:
        return unicode(assimp_string.data, errors='ignore')

# It is faster and more correct to have an init function for each assimp class
def _init_face(aiFace):
    aiFace.indices = [aiFace.mIndices[i] for i in range(aiFace.mNumIndices)]
assimp_struct_inits = { structs.Face : _init_face }

def call_init(obj, caller = None):
    if helper.hasattr_silent(obj, 'contents'):  # pointer
        _init(obj.contents, obj, caller)
    else:
        _init(obj, parent=caller)

def _is_init_type(obj):
    if obj and helper.hasattr_silent(obj, 'contents'):  # pointer
        return _is_init_type(obj[0])
    # Null-pointer case that arises when we reach a mesh attribute
    # like mBitangents which uses mNumVertices rather than mNumBitangents,
    # so it breaks the 'is iterable' check.
    # Basically:
    # FIXME!
    elif not bool(obj):
        return False
    tname = obj.__class__.__name__
    return not (tname[:2] == 'c_' or tname == 'Structure'
                or tname == 'POINTER') and not isinstance(obj, (int, str, bytes))

def _init(self, target = None, parent = None):
    """
    Custom initialize() for C structs; adds safely accessible member functionality.

    :param target: set the object which receives the added methods. Useful when
    manipulating pointers, to skip the intermediate 'contents' dereferencing.
    """
    if not target:
        target = self

    dirself = dir(self)
    for m in dirself:

        if m.startswith("_"):
            continue

        if m.startswith('mNum'):
            if 'm' + m[4:] in dirself:
                continue  # will be processed later on
            else:
                name = m[1:].lower()
                obj = getattr(self, m)
                setattr(target, name, obj)
                continue

        if m == 'mName':
            target.name = str(_convert_assimp_string(self.mName))
            target.__class__.__repr__ = lambda x: str(x.__class__) + "(" + getattr(x, 'name', '') + ")"
            target.__class__.__str__ = lambda x: getattr(x, 'name', '')
            continue

        name = m[1:].lower()

        obj = getattr(self, m)

        # Create tuples
        if isinstance(obj, structs.assimp_structs_as_tuple):
            setattr(target, name, make_tuple(obj))
            logger.debug(str(self) + ": Added array " + str(getattr(target, name)) + " as self." + name.lower())
            continue

        if m.startswith('m'):

            if name == "parent":
                setattr(target, name, parent)
                logger.debug("Added a parent as self." + name)
                continue

            if helper.hasattr_silent(self, 'mNum' + m[1:]):

                length = getattr(self, 'mNum' + m[1:])

                # -> special case: properties are stored as a dict.
                if m == 'mProperties':
                    setattr(target, name, _get_properties(obj, length))
                    continue

                if not length:  # empty!
                    setattr(target, name, [])
                    logger.debug(str(self) + ": " + name + " is an empty list.")
                    continue

                try:
                    if obj._type_ in structs.assimp_structs_as_tuple:
                        if numpy:
                            setattr(target, name, numpy.array([make_tuple(obj[i]) for i in range(length)], dtype=numpy.float32))
                            logger.debug(str(self) + ": Added an array of numpy arrays (type " + str(type(obj)) + ") as self." + name)
                        else:
                            setattr(target, name, [make_tuple(obj[i]) for i in range(length)])
                            logger.debug(str(self) + ": Added a list of lists (type " + str(type(obj)) + ") as self." + name)
                    else:
                        setattr(target, name, [obj[i] for i in range(length)])  # TODO: maybe not necessary to recreate an array?
                        logger.debug(str(self) + ": Added list of " + str(obj) + " " + name + " as self." + name + " (type: " + str(type(obj)) + ")")

                        # initialize array elements
                        try:
                            init = assimp_struct_inits[type(obj[0])]
                        except KeyError:
                            if _is_init_type(obj[0]):
                                for e in getattr(target, name):
                                    call_init(e, target)
                        else:
                            for e in getattr(target, name):
                                init(e)

                except IndexError:
                    logger.error("in " + str(self) + " : mismatch between mNum" + name + " and the actual amount of data in m" + name + ". This may be due to a version mismatch between libassimp and pyassimp. Quitting now.")
                    sys.exit(1)

                except ValueError as e:
                    logger.error("In " + str(self) + "->" + name + ": " + str(e) + ". Quitting now.")
                    if "setting an array element with a sequence" in str(e):
                        logger.error("Note that pyassimp does not currently "
                                     "support meshes with mixed triangles "
                                     "and quads. Try to load your mesh with "
                                     "a post-processing to triangulate your "
                                     "faces.")
                    raise e

            else:  # starts with 'm' but not iterable
                setattr(target, name, obj)
                logger.debug("Added " + name + " as self." + name + " (type: " + str(type(obj)) + ")")

                if _is_init_type(obj):
                    call_init(obj, target)

    if isinstance(self, structs.Mesh):
        _finalize_mesh(self, target)

    if isinstance(self, structs.Texture):
        _finalize_texture(self, target)

    if isinstance(self, structs.Metadata):
        _finalize_metadata(self, target)

    return self
def pythonize_assimp(type, obj, scene):
    """ This method modifies the Assimp data structures
    to make them easier to work with in Python.

    Supported operations:
    - MESH: replace a list of mesh IDs by references to these meshes
    - ADDTRANSFORMATION: add a reference to an object's transformation taken from its associated node.

    :param type: the type of modification to operate (cf above)
    :param obj: the input object to modify
    :param scene: a reference to the whole scene
    """

    if type == "MESH":
        meshes = []
        for i in obj:
            meshes.append(scene.meshes[i])
        return meshes

    if type == "ADDTRANSFORMATION":
        def getnode(node, name):
            if node.name == name: return node
            for child in node.children:
                n = getnode(child, name)
                if n: return n

        node = getnode(scene.rootnode, obj.name)
        if not node:
            raise AssimpError("Object " + str(obj) + " has no associated node!")
        setattr(obj, "transformation", node.transformation)

def recur_pythonize(node, scene):
    '''
    Recursively call pythonize_assimp on
    the node tree to apply several post-processings that
    pythonize the assimp data structures.
    '''
    node.meshes = pythonize_assimp("MESH", node.meshes, scene)
    for mesh in node.meshes:
        mesh.material = scene.materials[mesh.materialindex]
    for cam in scene.cameras:
        pythonize_assimp("ADDTRANSFORMATION", cam, scene)
    for c in node.children:
        recur_pythonize(c, scene)
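The `getnode` helper inside `pythonize_assimp` is a plain depth-first search. The same pattern on a minimal stand-in node class (not the real pyassimp node type):

```python
# Toy stand-in for a scene node: just a name and children.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def getnode(node, name):
    # Depth-first search; returns None when no node matches.
    if node.name == name:
        return node
    for child in node.children:
        n = getnode(child, name)
        if n:
            return n

root = Node('root', [Node('arm', [Node('hand')]), Node('leg')])
print(getnode(root, 'hand').name)
```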
def load(filename,
         file_type  = None,
         processing = postprocess.aiProcess_Triangulate):
    '''
    Load a model into a scene. On failure throws AssimpError.

    Arguments
    ---------
    filename:   Either a filename or a file object to load the model from.
                If a file object is passed, file_type MUST be specified;
                otherwise Assimp has no idea which importer to use.
                This is named 'filename' so as to not break legacy code.
    processing: assimp postprocessing parameters. Verbose keywords are imported
                from postprocess, and the parameters can be combined bitwise to
                generate the final processing value. Note that the default value will
                triangulate quad faces. Example of generating other possible values:
                processing = (pyassimp.postprocess.aiProcess_Triangulate |
                              pyassimp.postprocess.aiProcess_OptimizeMeshes)
    file_type:  string of file extension, such as 'stl'

    Returns
    ---------
    Scene object with model data
    '''

    if hasattr(filename, 'read'):
        # This is the case where a file object has been passed to load.
        # It is calling the following function:
        # const aiScene* aiImportFileFromMemory(const char* pBuffer,
        #                                       unsigned int pLength,
        #                                       unsigned int pFlags,
        #                                       const char* pHint)
        if file_type is None:
            raise AssimpError('File type must be specified when passing file objects!')
        data  = filename.read()
        model = _assimp_lib.load_mem(data,
                                     len(data),
                                     processing,
                                     file_type)
    else:
        # a filename string has been passed
        model = _assimp_lib.load(filename.encode(sys.getfilesystemencoding()), processing)

    if not model:
        raise AssimpError('Could not import file!')
    scene = _init(model.contents)
    recur_pythonize(scene.rootnode, scene)
    return scene
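Combining post-processing flags bitwise, as the `load` docstring describes, works like any flag mask. The numeric values below are illustrative placeholders; real code should use the named `pyassimp.postprocess.aiProcess_*` constants:

```python
# Illustrative placeholder values -- in real code, import the named
# constants from pyassimp.postprocess instead of hard-coding numbers.
aiProcess_Triangulate    = 0x8
aiProcess_OptimizeMeshes = 0x200000

# Flags combine with bitwise OR and can be tested with bitwise AND.
processing = aiProcess_Triangulate | aiProcess_OptimizeMeshes
print(hex(processing))
```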
def export(scene,
           filename,
           file_type = None,
           processing = postprocess.aiProcess_Triangulate):
    '''
    Export a scene. On failure throws AssimpError.

    Arguments
    ---------
    scene:      scene to export.
    filename:   Filename that the scene should be exported to.
    file_type:  string of file exporter to use. For example "collada".
    processing: assimp postprocessing parameters. Verbose keywords are imported
                from postprocessing, and the parameters can be combined bitwise to
                generate the final processing value. Note that the default value will
                triangulate quad faces. Example of generating other possible values:
                processing = (pyassimp.postprocess.aiProcess_Triangulate |
                              pyassimp.postprocess.aiProcess_OptimizeMeshes)
    '''

    exportStatus = _assimp_lib.export(ctypes.pointer(scene),
                                      file_type.encode("ascii"),
                                      filename.encode(sys.getfilesystemencoding()),
                                      processing)

    if exportStatus != 0:
        raise AssimpError('Could not export scene!')

def export_blob(scene,
                file_type = None,
                processing = postprocess.aiProcess_Triangulate):
    '''
    Export a scene and return a blob in the correct format. On failure throws AssimpError.

    Arguments
    ---------
    scene:      scene to export.
    file_type:  string of file exporter to use. For example "collada".
    processing: assimp postprocessing parameters. Verbose keywords are imported
                from postprocessing, and the parameters can be combined bitwise to
                generate the final processing value. Note that the default value will
                triangulate quad faces. Example of generating other possible values:
                processing = (pyassimp.postprocess.aiProcess_Triangulate |
                              pyassimp.postprocess.aiProcess_OptimizeMeshes)

    Returns
    ---------
    Pointer to structs.ExportDataBlob
    '''
    exportBlobPtr = _assimp_lib.export_blob(ctypes.pointer(scene),
                                            file_type.encode("ascii"),
                                            processing)

    # A NULL ctypes pointer is falsy, so test truthiness rather than comparing to 0.
    if not exportBlobPtr:
        raise AssimpError('Could not export scene to blob!')
    return exportBlobPtr

def release(scene):
    _assimp_lib.release(ctypes.pointer(scene))

def _finalize_texture(tex, target):
    setattr(target, "achformathint", tex.achFormatHint)
    if numpy:
        data = numpy.array([make_tuple(getattr(tex, "pcData")[i]) for i in range(tex.mWidth * tex.mHeight)])
    else:
        data = [make_tuple(getattr(tex, "pcData")[i]) for i in range(tex.mWidth * tex.mHeight)]
    setattr(target, "data", data)

def _finalize_mesh(mesh, target):
    """ Building of meshes is a bit specific.

    We override here the various datasets that cannot
    be processed as regular fields.

    For instance, the length of the normals array is
    mNumVertices (no mNumNormals is available).
    """
    nb_vertices = getattr(mesh, "mNumVertices")

    def fill(name):
        mAttr = getattr(mesh, name)
        if numpy:
            if mAttr:
                data = numpy.array([make_tuple(getattr(mesh, name)[i]) for i in range(nb_vertices)], dtype=numpy.float32)
                setattr(target, name[1:].lower(), data)
            else:
                setattr(target, name[1:].lower(), numpy.array([], dtype="float32"))
        else:
            if mAttr:
                data = [make_tuple(getattr(mesh, name)[i]) for i in range(nb_vertices)]
                setattr(target, name[1:].lower(), data)
            else:
                setattr(target, name[1:].lower(), [])

    def fillarray(name):
        mAttr = getattr(mesh, name)

        data = []
        for index, mSubAttr in enumerate(mAttr):
            if mSubAttr:
                data.append([make_tuple(getattr(mesh, name)[index][i]) for i in range(nb_vertices)])

        if numpy:
            setattr(target, name[1:].lower(), numpy.array(data, dtype=numpy.float32))
        else:
            setattr(target, name[1:].lower(), data)

    fill("mNormals")
    fill("mTangents")
    fill("mBitangents")

    fillarray("mColors")
    fillarray("mTextureCoords")

    # prepare faces
    if numpy:
        faces = numpy.array([f.indices for f in target.faces], dtype=numpy.int32)
    else:
        faces = [f.indices for f in target.faces]
    setattr(target, 'faces', faces)

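The face preparation at the end of `_finalize_mesh` just collects each face's index list. A standalone sketch of that step with a stand-in `Face` class (hypothetical, for illustration only):

```python
class Face:
    """Stand-in for pyassimp's wrapped face objects (illustration only)."""
    def __init__(self, indices):
        self.indices = indices

# Two triangles sharing an edge, as they would appear after triangulation.
faces_in = [Face([0, 1, 2]), Face([2, 1, 3])]

# The same extraction _finalize_mesh performs in its no-numpy branch.
faces = [f.indices for f in faces_in]

print(faces)  # [[0, 1, 2], [2, 1, 3]]
```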
def _init_metadata_entry(entry):
    entry.type = entry.mType
    if entry.type == structs.MetadataEntry.AI_BOOL:
        entry.data = ctypes.cast(entry.mData, ctypes.POINTER(ctypes.c_bool)).contents.value
    elif entry.type == structs.MetadataEntry.AI_INT32:
        entry.data = ctypes.cast(entry.mData, ctypes.POINTER(ctypes.c_int32)).contents.value
    elif entry.type == structs.MetadataEntry.AI_UINT64:
        entry.data = ctypes.cast(entry.mData, ctypes.POINTER(ctypes.c_uint64)).contents.value
    elif entry.type == structs.MetadataEntry.AI_FLOAT:
        entry.data = ctypes.cast(entry.mData, ctypes.POINTER(ctypes.c_float)).contents.value
    elif entry.type == structs.MetadataEntry.AI_DOUBLE:
        entry.data = ctypes.cast(entry.mData, ctypes.POINTER(ctypes.c_double)).contents.value
    elif entry.type == structs.MetadataEntry.AI_AISTRING:
        assimp_string = ctypes.cast(entry.mData, ctypes.POINTER(structs.String)).contents
        entry.data = _convert_assimp_string(assimp_string)
    elif entry.type == structs.MetadataEntry.AI_AIVECTOR3D:
        assimp_vector = ctypes.cast(entry.mData, ctypes.POINTER(structs.Vector3D)).contents
        entry.data = make_tuple(assimp_vector)

    return entry

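Each branch above reinterprets the raw `mData` pointer according to the declared metadata type. The same `ctypes.cast` pattern, shown standalone with stdlib `ctypes` only (the buffer and value are illustrative):

```python
import ctypes

# A raw buffer holding a 32-bit integer, as assimp's void* mData would.
raw = ctypes.create_string_buffer(ctypes.sizeof(ctypes.c_int32))
ctypes.cast(raw, ctypes.POINTER(ctypes.c_int32)).contents.value = 42

# Reinterpret the untyped pointer with the declared type, as the code above does.
mData = ctypes.cast(raw, ctypes.c_void_p)
value = ctypes.cast(mData, ctypes.POINTER(ctypes.c_int32)).contents.value
print(value)  # 42
```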
def _finalize_metadata(metadata, target):
    """ Building the metadata object is a bit specific.

    Firstly, there are two separate arrays: one with metadata keys and one
    with metadata values, and there are no corresponding mNum* attributes,
    so the C arrays are not converted to Python arrays using the generic
    code in the _init function.

    Secondly, a metadata entry value has to be cast according to its declared
    metadata entry type.
    """
    length = metadata.mNumProperties
    setattr(target, 'keys', [str(_convert_assimp_string(metadata.mKeys[i])) for i in range(length)])
    setattr(target, 'values', [_init_metadata_entry(metadata.mValues[i]) for i in range(length)])

class PropertyGetter(dict):
    def __getitem__(self, key):
        semantic = 0
        if isinstance(key, tuple):
            key, semantic = key

        return dict.__getitem__(self, (key, semantic))

    def keys(self):
        for k in dict.keys(self):
            yield k[0]

    def __iter__(self):
        return self.keys()

    def items(self):
        for k, v in dict.items(self):
            yield k[0], v


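`PropertyGetter` stores `(key, semantic)` tuples but lets callers index by the key alone, with the semantic defaulting to 0. A self-contained re-creation of the lookup behaviour (sample property values are invented for illustration):

```python
class PropertyGetter(dict):
    """Re-creation of pyassimp's material-property dict (for illustration)."""
    def __getitem__(self, key):
        semantic = 0
        if isinstance(key, tuple):
            key, semantic = key
        return dict.__getitem__(self, (key, semantic))

    def keys(self):
        for k in dict.keys(self):
            yield k[0]

props = PropertyGetter({('diffuse', 0): [0.8, 0.8, 0.8],
                        ('file', 1): 'albedo.png'})

print(props['diffuse'])      # [0.8, 0.8, 0.8]  (semantic defaults to 0)
print(props[('file', 1)])    # albedo.png       (explicit texture semantic)
print(sorted(props.keys()))  # ['diffuse', 'file']
```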
def _get_properties(properties, length):
    """
    Convenience function to get the material properties as a dict
    and values in a python format.
    """
    result = {}
    # read all properties
    for p in [properties[i] for i in range(length)]:
        # the name
        p = p.contents
        key = str(_convert_assimp_string(p.mKey))
        key = (key.split('.')[1], p.mSemantic)

        # the data
        if p.mType == 1:
            arr = ctypes.cast(p.mData,
                              ctypes.POINTER(ctypes.c_float * int(p.mDataLength/ctypes.sizeof(ctypes.c_float)))
                              ).contents
            value = [x for x in arr]
        elif p.mType == 3:  # string can't be an array
            value = _convert_assimp_string(ctypes.cast(p.mData, ctypes.POINTER(structs.MaterialPropertyString)).contents)

        elif p.mType == 4:
            arr = ctypes.cast(p.mData,
                              ctypes.POINTER(ctypes.c_int * int(p.mDataLength/ctypes.sizeof(ctypes.c_int)))
                              ).contents
            value = [x for x in arr]
        else:
            value = p.mData[:p.mDataLength]

        if len(value) == 1:
            [value] = value

        result[key] = value

    return PropertyGetter(result)

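Assimp material keys look like `'$clr.diffuse'` or `'$tex.file'`; the code above keeps only the part after the dot and pairs it with the property's texture semantic. The key conversion, standalone (sample keys and semantics are illustrative):

```python
# Raw assimp-style material keys and their semantics (illustrative values).
raw = [('$clr.diffuse', 0), ('$tex.file', 1)]

# The same key conversion _get_properties applies to each property.
keys = [(k.split('.')[1], semantic) for k, semantic in raw]

print(keys)  # [('diffuse', 0), ('file', 1)]
```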
def decompose_matrix(matrix):
    if not isinstance(matrix, structs.Matrix4x4):
        raise AssimpError("pyassimp.decompose_matrix failed: Not a Matrix4x4!")

    scaling = structs.Vector3D()
    rotation = structs.Quaternion()
    position = structs.Vector3D()

    _assimp_lib.dll.aiDecomposeMatrix(ctypes.pointer(matrix),
                                      ctypes.byref(scaling),
                                      ctypes.byref(rotation),
                                      ctypes.byref(position))
    return scaling._init(), rotation._init(), position._init()

11
thirdparty/assimp/port/PyAssimp/pyassimp/errors.py
vendored
Normal file
@@ -0,0 +1,11 @@
#-*- coding: UTF-8 -*-

"""
All possible errors.
"""

class AssimpError(BaseException):
    """
    If an internal error occurs.
    """
    pass

41
thirdparty/assimp/port/PyAssimp/pyassimp/formats.py
vendored
Normal file
@@ -0,0 +1,41 @@
FORMATS = ["CSM",
           "LWS",
           "B3D",
           "COB",
           "PLY",
           "IFC",
           "OFF",
           "SMD",
           "IRRMESH",
           "3D",
           "DAE",
           "MDL",
           "HMP",
           "TER",
           "WRL",
           "XML",
           "NFF",
           "AC",
           "OBJ",
           "3DS",
           "STL",
           "IRR",
           "Q3O",
           "Q3D",
           "MS3D",
           "Q3S",
           "ZGL",
           "MD2",
           "X",
           "BLEND",
           "XGL",
           "MD5MESH",
           "MAX",
           "LXO",
           "DXF",
           "BVH",
           "LWO",
           "NDO"]

def available_formats():
    return FORMATS

281
thirdparty/assimp/port/PyAssimp/pyassimp/helper.py
vendored
Normal file
@@ -0,0 +1,281 @@
#-*- coding: UTF-8 -*-

"""
Some fancy helper functions.
"""

import os
import ctypes
import operator

from distutils.sysconfig import get_python_lib
import re
import sys

try: import numpy
except ImportError: numpy = None

import logging
logger = logging.getLogger("pyassimp")

from .errors import AssimpError

additional_dirs, ext_whitelist = [], []

# populate search directories and lists of allowed file extensions
# depending on the platform we're running on.
if os.name == 'posix':
    additional_dirs.append('./')
    additional_dirs.append('/usr/lib/')
    additional_dirs.append('/usr/lib/x86_64-linux-gnu/')
    additional_dirs.append('/usr/local/lib/')

    if 'LD_LIBRARY_PATH' in os.environ:
        additional_dirs.extend([item for item in os.environ['LD_LIBRARY_PATH'].split(':') if item])

    # check if running from anaconda.
    if "conda" in sys.version.lower() or "continuum" in sys.version.lower():
        cur_path = get_python_lib()
        pattern = re.compile(r'.*/lib/')
        conda_lib = pattern.match(cur_path).group()
        logger.info("Adding Anaconda lib path: " + conda_lib)
        additional_dirs.append(conda_lib)

    # note - this won't catch libassimp.so.N.n, but
    # currently there's always a symlink called
    # libassimp.so in /usr/local/lib.
    ext_whitelist.append('.so')
    # libassimp.dylib in /usr/local/lib
    ext_whitelist.append('.dylib')

elif os.name == 'nt':
    ext_whitelist.append('.dll')
    path_dirs = os.environ['PATH'].split(';')
    additional_dirs.extend(path_dirs)

def vec2tuple(x):
    """ Converts a VECTOR3D to a Tuple """
    return (x.x, x.y, x.z)

def transform(vector3, matrix4x4):
    """ Apply a transformation matrix on a 3D vector.

    :param vector3: array with 3 elements
    :param matrix4x4: 4x4 matrix
    """
    if numpy:
        return numpy.dot(matrix4x4, numpy.append(vector3, 1.))
    else:
        m0, m1, m2, m3 = matrix4x4
        x, y, z = vector3
        return [
            m0[0]*x + m0[1]*y + m0[2]*z + m0[3],
            m1[0]*x + m1[1]*y + m1[2]*z + m1[3],
            m2[0]*x + m2[1]*y + m2[2]*z + m2[3],
            m3[0]*x + m3[1]*y + m3[2]*z + m3[3]
        ]

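`transform` treats the 3-vector as homogeneous `(x, y, z, 1)` and multiplies it by a row-major 4x4 matrix. The pure-python branch, re-created standalone and checked against a translation matrix (the matrix and vector are example values):

```python
def transform(vector3, matrix4x4):
    """Pure-python re-creation of helper.transform's no-numpy branch."""
    m0, m1, m2, m3 = matrix4x4
    x, y, z = vector3
    return [m0[0]*x + m0[1]*y + m0[2]*z + m0[3],
            m1[0]*x + m1[1]*y + m1[2]*z + m1[3],
            m2[0]*x + m2[1]*y + m2[2]*z + m2[3],
            m3[0]*x + m3[1]*y + m3[2]*z + m3[3]]

# Translation by (10, 20, 30); the last row yields the homogeneous w = 1.
translate = [[1, 0, 0, 10],
             [0, 1, 0, 20],
             [0, 0, 1, 30],
             [0, 0, 0, 1]]

print(transform([1, 2, 3], translate))  # [11, 22, 33, 1]
```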
def _inv(matrix4x4):
    m0, m1, m2, m3 = matrix4x4

    det = m0[3]*m1[2]*m2[1]*m3[0] - m0[2]*m1[3]*m2[1]*m3[0] - \
          m0[3]*m1[1]*m2[2]*m3[0] + m0[1]*m1[3]*m2[2]*m3[0] + \
          m0[2]*m1[1]*m2[3]*m3[0] - m0[1]*m1[2]*m2[3]*m3[0] - \
          m0[3]*m1[2]*m2[0]*m3[1] + m0[2]*m1[3]*m2[0]*m3[1] + \
          m0[3]*m1[0]*m2[2]*m3[1] - m0[0]*m1[3]*m2[2]*m3[1] - \
          m0[2]*m1[0]*m2[3]*m3[1] + m0[0]*m1[2]*m2[3]*m3[1] + \
          m0[3]*m1[1]*m2[0]*m3[2] - m0[1]*m1[3]*m2[0]*m3[2] - \
          m0[3]*m1[0]*m2[1]*m3[2] + m0[0]*m1[3]*m2[1]*m3[2] + \
          m0[1]*m1[0]*m2[3]*m3[2] - m0[0]*m1[1]*m2[3]*m3[2] - \
          m0[2]*m1[1]*m2[0]*m3[3] + m0[1]*m1[2]*m2[0]*m3[3] + \
          m0[2]*m1[0]*m2[1]*m3[3] - m0[0]*m1[2]*m2[1]*m3[3] - \
          m0[1]*m1[0]*m2[2]*m3[3] + m0[0]*m1[1]*m2[2]*m3[3]

    return [[( m1[2]*m2[3]*m3[1] - m1[3]*m2[2]*m3[1] + m1[3]*m2[1]*m3[2] - m1[1]*m2[3]*m3[2] - m1[2]*m2[1]*m3[3] + m1[1]*m2[2]*m3[3]) /det,
             ( m0[3]*m2[2]*m3[1] - m0[2]*m2[3]*m3[1] - m0[3]*m2[1]*m3[2] + m0[1]*m2[3]*m3[2] + m0[2]*m2[1]*m3[3] - m0[1]*m2[2]*m3[3]) /det,
             ( m0[2]*m1[3]*m3[1] - m0[3]*m1[2]*m3[1] + m0[3]*m1[1]*m3[2] - m0[1]*m1[3]*m3[2] - m0[2]*m1[1]*m3[3] + m0[1]*m1[2]*m3[3]) /det,
             ( m0[3]*m1[2]*m2[1] - m0[2]*m1[3]*m2[1] - m0[3]*m1[1]*m2[2] + m0[1]*m1[3]*m2[2] + m0[2]*m1[1]*m2[3] - m0[1]*m1[2]*m2[3]) /det],
            [( m1[3]*m2[2]*m3[0] - m1[2]*m2[3]*m3[0] - m1[3]*m2[0]*m3[2] + m1[0]*m2[3]*m3[2] + m1[2]*m2[0]*m3[3] - m1[0]*m2[2]*m3[3]) /det,
             ( m0[2]*m2[3]*m3[0] - m0[3]*m2[2]*m3[0] + m0[3]*m2[0]*m3[2] - m0[0]*m2[3]*m3[2] - m0[2]*m2[0]*m3[3] + m0[0]*m2[2]*m3[3]) /det,
             ( m0[3]*m1[2]*m3[0] - m0[2]*m1[3]*m3[0] - m0[3]*m1[0]*m3[2] + m0[0]*m1[3]*m3[2] + m0[2]*m1[0]*m3[3] - m0[0]*m1[2]*m3[3]) /det,
             ( m0[2]*m1[3]*m2[0] - m0[3]*m1[2]*m2[0] + m0[3]*m1[0]*m2[2] - m0[0]*m1[3]*m2[2] - m0[2]*m1[0]*m2[3] + m0[0]*m1[2]*m2[3]) /det],
            [( m1[1]*m2[3]*m3[0] - m1[3]*m2[1]*m3[0] + m1[3]*m2[0]*m3[1] - m1[0]*m2[3]*m3[1] - m1[1]*m2[0]*m3[3] + m1[0]*m2[1]*m3[3]) /det,
             ( m0[3]*m2[1]*m3[0] - m0[1]*m2[3]*m3[0] - m0[3]*m2[0]*m3[1] + m0[0]*m2[3]*m3[1] + m0[1]*m2[0]*m3[3] - m0[0]*m2[1]*m3[3]) /det,
             ( m0[1]*m1[3]*m3[0] - m0[3]*m1[1]*m3[0] + m0[3]*m1[0]*m3[1] - m0[0]*m1[3]*m3[1] - m0[1]*m1[0]*m3[3] + m0[0]*m1[1]*m3[3]) /det,
             ( m0[3]*m1[1]*m2[0] - m0[1]*m1[3]*m2[0] - m0[3]*m1[0]*m2[1] + m0[0]*m1[3]*m2[1] + m0[1]*m1[0]*m2[3] - m0[0]*m1[1]*m2[3]) /det],
            [( m1[2]*m2[1]*m3[0] - m1[1]*m2[2]*m3[0] - m1[2]*m2[0]*m3[1] + m1[0]*m2[2]*m3[1] + m1[1]*m2[0]*m3[2] - m1[0]*m2[1]*m3[2]) /det,
             ( m0[1]*m2[2]*m3[0] - m0[2]*m2[1]*m3[0] + m0[2]*m2[0]*m3[1] - m0[0]*m2[2]*m3[1] - m0[1]*m2[0]*m3[2] + m0[0]*m2[1]*m3[2]) /det,
             ( m0[2]*m1[1]*m3[0] - m0[1]*m1[2]*m3[0] - m0[2]*m1[0]*m3[1] + m0[0]*m1[2]*m3[1] + m0[1]*m1[0]*m3[2] - m0[0]*m1[1]*m3[2]) /det,
             ( m0[1]*m1[2]*m2[0] - m0[2]*m1[1]*m2[0] + m0[2]*m1[0]*m2[1] - m0[0]*m1[2]*m2[1] - m0[1]*m1[0]*m2[2] + m0[0]*m1[1]*m2[2]) /det]]

def get_bounding_box(scene):
    bb_min = [1e10, 1e10, 1e10]    # x,y,z
    bb_max = [-1e10, -1e10, -1e10] # x,y,z
    inv = numpy.linalg.inv if numpy else _inv
    return get_bounding_box_for_node(scene.rootnode, bb_min, bb_max, inv(scene.rootnode.transformation))

def get_bounding_box_for_node(node, bb_min, bb_max, transformation):

    if numpy:
        transformation = numpy.dot(transformation, node.transformation)
    else:
        t0, t1, t2, t3 = transformation
        T0, T1, T2, T3 = node.transformation
        transformation = [ [
            t0[0]*T0[0] + t0[1]*T1[0] + t0[2]*T2[0] + t0[3]*T3[0],
            t0[0]*T0[1] + t0[1]*T1[1] + t0[2]*T2[1] + t0[3]*T3[1],
            t0[0]*T0[2] + t0[1]*T1[2] + t0[2]*T2[2] + t0[3]*T3[2],
            t0[0]*T0[3] + t0[1]*T1[3] + t0[2]*T2[3] + t0[3]*T3[3]
        ],[
            t1[0]*T0[0] + t1[1]*T1[0] + t1[2]*T2[0] + t1[3]*T3[0],
            t1[0]*T0[1] + t1[1]*T1[1] + t1[2]*T2[1] + t1[3]*T3[1],
            t1[0]*T0[2] + t1[1]*T1[2] + t1[2]*T2[2] + t1[3]*T3[2],
            t1[0]*T0[3] + t1[1]*T1[3] + t1[2]*T2[3] + t1[3]*T3[3]
        ],[
            t2[0]*T0[0] + t2[1]*T1[0] + t2[2]*T2[0] + t2[3]*T3[0],
            t2[0]*T0[1] + t2[1]*T1[1] + t2[2]*T2[1] + t2[3]*T3[1],
            t2[0]*T0[2] + t2[1]*T1[2] + t2[2]*T2[2] + t2[3]*T3[2],
            t2[0]*T0[3] + t2[1]*T1[3] + t2[2]*T2[3] + t2[3]*T3[3]
        ],[
            t3[0]*T0[0] + t3[1]*T1[0] + t3[2]*T2[0] + t3[3]*T3[0],
            t3[0]*T0[1] + t3[1]*T1[1] + t3[2]*T2[1] + t3[3]*T3[1],
            t3[0]*T0[2] + t3[1]*T1[2] + t3[2]*T2[2] + t3[3]*T3[2],
            t3[0]*T0[3] + t3[1]*T1[3] + t3[2]*T2[3] + t3[3]*T3[3]
        ] ]

    for mesh in node.meshes:
        for v in mesh.vertices:
            v = transform(v, transformation)
            bb_min[0] = min(bb_min[0], v[0])
            bb_min[1] = min(bb_min[1], v[1])
            bb_min[2] = min(bb_min[2], v[2])
            bb_max[0] = max(bb_max[0], v[0])
            bb_max[1] = max(bb_max[1], v[1])
            bb_max[2] = max(bb_max[2], v[2])

    for child in node.children:
        bb_min, bb_max = get_bounding_box_for_node(child, bb_min, bb_max, transformation)

    return bb_min, bb_max

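The min/max accumulation in `get_bounding_box_for_node` is independent of assimp itself; a standalone sketch of it over a plain list of already-transformed vertices (the coordinates are illustrative):

```python
# Transformed vertex positions (illustrative values).
vertices = [(0.0, 1.0, -2.0), (3.0, -1.0, 0.5), (-1.5, 0.0, 2.0)]

bb_min = [1e10, 1e10, 1e10]     # x, y, z
bb_max = [-1e10, -1e10, -1e10]  # x, y, z

# The same per-axis accumulation the node walk performs.
for v in vertices:
    for axis in range(3):
        bb_min[axis] = min(bb_min[axis], v[axis])
        bb_max[axis] = max(bb_max[axis], v[axis])

print(bb_min, bb_max)  # [-1.5, -1.0, -2.0] [3.0, 1.0, 2.0]
```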
def try_load_functions(library_path, dll):
    '''
    Try to bind to aiImportFile and aiReleaseImport

    Arguments
    ---------
    library_path: path to current lib
    dll:          ctypes handle to library

    Returns
    ---------
    If unsuccessful:
        None
    If successful:
        Tuple containing (library_path,
                          load from filename function,
                          load from memory function,
                          export to filename function,
                          export to blob function,
                          release function,
                          ctypes handle to assimp library)
    '''

    try:
        load = dll.aiImportFile
        release = dll.aiReleaseImport
        load_mem = dll.aiImportFileFromMemory
        export = dll.aiExportScene
        export2blob = dll.aiExportSceneToBlob
    except AttributeError:
        # OK, this is a library, but it doesn't have the functions we need
        return None

    # library found!
    from .structs import Scene, ExportDataBlob
    load.restype = ctypes.POINTER(Scene)
    load_mem.restype = ctypes.POINTER(Scene)
    export2blob.restype = ctypes.POINTER(ExportDataBlob)
    return (library_path, load, load_mem, export, export2blob, release, dll)

def search_library():
    '''
    Loads the assimp library.
    Throws exception AssimpError if no library_path is found

    Returns: tuple, (load from filename function,
                     load from memory function,
                     export to filename function,
                     export to blob function,
                     release function,
                     dll)
    '''
    # this path
    folder = os.path.dirname(__file__)

    # silence 'DLL not found' message boxes on win
    try:
        ctypes.windll.kernel32.SetErrorMode(0x8007)
    except AttributeError:
        pass

    candidates = []
    # test every file
    for curfolder in [folder] + additional_dirs:
        if os.path.isdir(curfolder):
            for filename in os.listdir(curfolder):
                # our minimum requirement for candidates is that
                # they should contain 'assimp' somewhere in
                # their name
                if filename.lower().find('assimp') == -1:
                    continue
                is_out = 1
                for et in ext_whitelist:
                    if et in filename.lower():
                        is_out = 0
                        break
                if is_out:
                    continue

                library_path = os.path.join(curfolder, filename)
                logger.debug('Try ' + library_path)
                try:
                    dll = ctypes.cdll.LoadLibrary(library_path)
                except Exception as e:
                    logger.warning(str(e))
                    # OK, this except is evil. But different OSs will throw different
                    # errors. So just ignore any errors.
                    continue
                # see if the functions we need are in the dll
                loaded = try_load_functions(library_path, dll)
                if loaded: candidates.append(loaded)

    if not candidates:
        # no library found
        raise AssimpError("assimp library not found")
    else:
        # get the newest library_path
        candidates = [(os.lstat(x[0])[-2], x) for x in candidates]
        res = max(candidates, key=operator.itemgetter(0))[1]
        logger.debug('Using assimp library located at ' + res[0])

        # XXX: if there are 1000 dll/so files containing 'assimp'
        # in their name, do we have all of them in our address
        # space now until gc kicks in?

        # XXX: take version postfix of the .so on linux?
        return res[1:]

def hasattr_silent(object, name):
    """
    Calls hasattr() with the given parameters and preserves the legacy (pre-Python 3.2)
    functionality of silently catching exceptions.

    Returns the result of hasattr() or False if an exception was raised.
    """

    try:
        if not object:
            return False
        return hasattr(object, name)
    except AttributeError:
        return False

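A self-contained re-creation of `hasattr_silent`, showing that falsy objects short-circuit to `False` while normal attribute checks pass through (the `Flaky` class is invented for the demo):

```python
def hasattr_silent(obj, name):
    """Re-creation of pyassimp's hasattr_silent (for illustration)."""
    try:
        if not obj:
            return False
        return hasattr(obj, name)
    except AttributeError:
        return False

class Flaky:
    @property
    def broken(self):
        raise AttributeError("computed attribute failed")

print(hasattr_silent(None, "anything"))   # False (falsy object)
print(hasattr_silent(Flaky(), "broken"))  # False (attribute lookup fails)
print(hasattr_silent("abc", "upper"))     # True
```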
89
thirdparty/assimp/port/PyAssimp/pyassimp/material.py
vendored
Normal file
@@ -0,0 +1,89 @@
# Dummy value.
#
# No texture, but the value to be used as 'texture semantic'
# (#aiMaterialProperty::mSemantic) for all material properties
# *not* related to textures.
#
aiTextureType_NONE = 0x0

# The texture is combined with the result of the diffuse
# lighting equation.
#
aiTextureType_DIFFUSE = 0x1

# The texture is combined with the result of the specular
# lighting equation.
#
aiTextureType_SPECULAR = 0x2

# The texture is combined with the result of the ambient
# lighting equation.
#
aiTextureType_AMBIENT = 0x3

# The texture is added to the result of the lighting
# calculation. It isn't influenced by incoming light.
#
aiTextureType_EMISSIVE = 0x4

# The texture is a height map.
#
# By convention, higher gray-scale values stand for
# higher elevations from the base height.
#
aiTextureType_HEIGHT = 0x5

# The texture is a (tangent space) normal-map.
#
# Again, there are several conventions for tangent-space
# normal maps. Assimp does (intentionally) not
# distinguish here.
#
aiTextureType_NORMALS = 0x6

# The texture defines the glossiness of the material.
#
# The glossiness is in fact the exponent of the specular
# (phong) lighting equation. Usually there is a conversion
# function defined to map the linear color values in the
# texture to a suitable exponent. Have fun.
#
aiTextureType_SHININESS = 0x7

# The texture defines per-pixel opacity.
#
# Usually 'white' means opaque and 'black' means
# 'transparency'. Or quite the opposite. Have fun.
#
aiTextureType_OPACITY = 0x8

# Displacement texture
#
# The exact purpose and format is application-dependent.
# Higher color values stand for higher vertex displacements.
#
aiTextureType_DISPLACEMENT = 0x9

# Lightmap texture (aka Ambient Occlusion)
#
# Both 'Lightmaps' and dedicated 'ambient occlusion maps' are
# covered by this material property. The texture contains a
# scaling value for the final color value of a pixel. Its
# intensity is not affected by incoming light.
#
aiTextureType_LIGHTMAP = 0xA

# Reflection texture
#
# Contains the color of a perfect mirror reflection.
# Rarely used, almost never for real-time applications.
#
aiTextureType_REFLECTION = 0xB

# Unknown texture
#
# A texture reference that does not match any of the definitions
# above is considered to be 'unknown'. It is still imported
# but is excluded from any further postprocessing.
#
aiTextureType_UNKNOWN = 0xC

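Since these are plain integer constants, a reverse lookup table is handy when printing a material's texture slots. A small sketch using a subset of the values above (the helper function name is invented):

```python
# Subset of the aiTextureType_* values defined above, keyed by constant.
TEXTURE_TYPE_NAMES = {
    0x0: 'aiTextureType_NONE',
    0x1: 'aiTextureType_DIFFUSE',
    0x2: 'aiTextureType_SPECULAR',
    0x6: 'aiTextureType_NORMALS',
}

def texture_type_name(semantic):
    # Fall back to the UNKNOWN name for values not in the table.
    return TEXTURE_TYPE_NAMES.get(semantic, 'aiTextureType_UNKNOWN')

print(texture_type_name(0x1))   # aiTextureType_DIFFUSE
print(texture_type_name(0x42))  # aiTextureType_UNKNOWN
```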
530
thirdparty/assimp/port/PyAssimp/pyassimp/postprocess.py
vendored
Normal file
@@ -0,0 +1,530 @@
## <hr>Calculates the tangents and bitangents for the imported meshes.
#
# Does nothing if a mesh does not have normals. You might want this post
# processing step to be executed if you plan to use tangent space calculations
# such as normal mapping applied to the meshes. There's a config setting,
# <tt>#AI_CONFIG_PP_CT_MAX_SMOOTHING_ANGLE</tt>, which allows you to specify
# a maximum smoothing angle for the algorithm. However, usually you'll
# want to leave it at the default value.
#
aiProcess_CalcTangentSpace = 0x1

## <hr>Identifies and joins identical vertex data sets within all
# imported meshes.
#
# After this step is run, each mesh contains unique vertices,
# so a vertex may be used by multiple faces. You usually want
# to use this post processing step. If your application deals with
# indexed geometry, this step is compulsory or you'll just waste rendering
# time. <b>If this flag is not specified</b>, no vertices are referenced by
# more than one face and <b>no index buffer is required</b> for rendering.
#
aiProcess_JoinIdenticalVertices = 0x2

## <hr>Converts all the imported data to a left-handed coordinate space.
#
# By default the data is returned in a right-handed coordinate space (which
# OpenGL prefers). In this space, +X points to the right,
# +Z points towards the viewer, and +Y points upwards. In the DirectX
# coordinate space +X points to the right, +Y points upwards, and +Z points
# away from the viewer.
#
# You'll probably want to consider this flag if you use Direct3D for
# rendering. The #aiProcess_ConvertToLeftHanded flag supersedes this
# setting and bundles all conversions typically required for D3D-based
# applications.
#
aiProcess_MakeLeftHanded = 0x4

## <hr>Triangulates all faces of all meshes.
#
# By default the imported mesh data might contain faces with more than 3
# indices. For rendering you'll usually want all faces to be triangles.
# This post processing step splits up faces with more than 3 indices into
# triangles. Line and point primitives are *not* modified! If you want
# 'triangles only' with no other kinds of primitives, try the following
# solution:
# <ul>
# <li>Specify both #aiProcess_Triangulate and #aiProcess_SortByPType</li>
# <li>Ignore all point and line meshes when you process assimp's output</li>
# </ul>
#
aiProcess_Triangulate = 0x8

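These flags are disjoint bits, so a combined mask can be decoded flag by flag. A standalone sketch naming which steps a mask enables, using the flag values defined in this file (the helper name is invented):

```python
# Flag values as defined above.
FLAGS = {
    'aiProcess_CalcTangentSpace':      0x1,
    'aiProcess_JoinIdenticalVertices': 0x2,
    'aiProcess_MakeLeftHanded':        0x4,
    'aiProcess_Triangulate':           0x8,
}

def enabled_steps(mask):
    # A flag is enabled when its bit is set in the mask.
    return [name for name, bit in sorted(FLAGS.items(), key=lambda kv: kv[1])
            if mask & bit]

mask = 0x8 | 0x2
print(enabled_steps(mask))  # ['aiProcess_JoinIdenticalVertices', 'aiProcess_Triangulate']
```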
## <hr>Removes some parts of the data structure (animations, materials,
# light sources, cameras, textures, vertex components).
#
# The components to be removed are specified in a separate
# configuration option, <tt>#AI_CONFIG_PP_RVC_FLAGS</tt>. This is quite useful
# if you don't need all parts of the output structure. Vertex colors
# are rarely used today for example... Calling this step to remove unneeded
# data from the pipeline as early as possible results in increased
# performance and a more optimized output data structure.
# This step is also useful if you want to force Assimp to recompute
# normals or tangents. The corresponding steps don't recompute them if
# they're already there (loaded from the source asset). By using this
# step you can make sure they are NOT there.
#
# This flag is a poor one, mainly because its purpose is usually
# misunderstood. Consider the following case: a 3D model has been exported
# from a CAD app, and it has per-face vertex colors. Vertex positions can't be
# shared, thus the #aiProcess_JoinIdenticalVertices step fails to
# optimize the data because of these nasty little vertex colors.
# Most apps don't even process them, so it's all for nothing. By using
# this step, unneeded components are excluded as early as possible
# thus opening more room for internal optimizations.
#
aiProcess_RemoveComponent = 0x10

## <hr>Generates normals for all faces of all meshes.
#
# This is ignored if normals are already there at the time this flag
# is evaluated. Model importers try to load them from the source file, so
# they're usually already there. Face normals are shared between all points
# of a single face, so a single point can have multiple normals, which
# forces the library to duplicate vertices in some cases.
# #aiProcess_JoinIdenticalVertices is *senseless* then.
#
# This flag may not be specified together with #aiProcess_GenSmoothNormals.
#
aiProcess_GenNormals = 0x20

## <hr>Generates smooth normals for all vertices in the mesh.
#
# This is ignored if normals are already there at the time this flag
# is evaluated. Model importers try to load them from the source file, so
# they're usually already there.
#
# This flag may not be specified together with
# #aiProcess_GenNormals. There's a configuration option,
# <tt>#AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE</tt> which allows you to specify
# an angle maximum for the normal smoothing algorithm. Normals exceeding
# this limit are not smoothed, resulting in a 'hard' seam between two faces.
# Using a decent angle here (e.g. 80 degrees) results in very good visual
# appearance.
#
aiProcess_GenSmoothNormals = 0x40

## <hr>Splits large meshes into smaller sub-meshes.
#
# This is quite useful for real-time rendering, where the number of triangles
# which can be maximally processed in a single draw-call is limited
# by the video driver/hardware. The maximum vertex buffer is usually limited
# too. Both requirements can be met with this step: you may specify both a
# triangle and vertex limit for a single mesh.
#
# The split limits can (and should!) be set through the
# <tt>#AI_CONFIG_PP_SLM_VERTEX_LIMIT</tt> and <tt>#AI_CONFIG_PP_SLM_TRIANGLE_LIMIT</tt>
# settings. The default values are <tt>#AI_SLM_DEFAULT_MAX_VERTICES</tt> and
# <tt>#AI_SLM_DEFAULT_MAX_TRIANGLES</tt>.
#
# Note that splitting is generally a time-consuming task, but only if there's
# something to split. The use of this step is recommended for most users.
#
aiProcess_SplitLargeMeshes = 0x80

## <hr>Removes the node graph and pre-transforms all vertices with
# the local transformation matrices of their nodes.
#
# The output scene still contains nodes, however there is only a
# root node with children, each one referencing only one mesh,
# and each mesh referencing one material. For rendering, you can
# simply render all meshes in order - you don't need to pay
# attention to local transformations and the node hierarchy.
# Animations are removed during this step.
# This step is intended for applications without a scenegraph.
# The step CAN cause some problems: if e.g. a mesh of the asset
# contains normals and another, using the same material index, does not,
# they will be brought together, but the first mesh's part of
# the normal list is zeroed. However, these artifacts are rare.
# @note The <tt>#AI_CONFIG_PP_PTV_NORMALIZE</tt> configuration property
# can be set to normalize the scene's spatial dimension to the -1...1
# range.
#
aiProcess_PreTransformVertices = 0x100

## <hr>Limits the number of bones simultaneously affecting a single vertex
# to a maximum value.
#
# If any vertex is affected by more than the maximum number of bones, the least
# important vertex weights are removed and the remaining vertex weights are
# renormalized so that the weights still sum up to 1.
# The default bone weight limit is 4 (defined as <tt>#AI_LMW_MAX_WEIGHTS</tt> in
# config.h), but you can use the <tt>#AI_CONFIG_PP_LBW_MAX_WEIGHTS</tt> setting to
# supply your own limit to the post processing step.
#
# If you intend to perform the skinning in hardware, this post processing
# step might be of interest to you.
#
aiProcess_LimitBoneWeights = 0x200

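To make the renormalization concrete, here is a minimal pure-Python sketch of the idea (illustrative only, not pyassimp's implementation; `limit_bone_weights` is a made-up helper name):

```python
def limit_bone_weights(weights, max_weights=4):
    """Keep the `max_weights` largest weights and renormalize them to sum to 1."""
    kept = sorted(weights, reverse=True)[:max_weights]
    total = sum(kept)
    return [w / total for w in kept]

# Five influences collapse to the four most important ones, still summing to 1.
print(limit_bone_weights([0.4, 0.3, 0.15, 0.1, 0.05]))
```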
## <hr>Validates the imported scene data structure.
# This makes sure that all indices are valid, all animations and
# bones are linked correctly, all material references are correct, etc.
#
# It is recommended that you capture Assimp's log output if you use this flag,
# so you can easily find out what's wrong if a file fails the
# validation. The validator is quite strict and will find #all#
# inconsistencies in the data structure... It is recommended that plugin
# developers use it to debug their loaders. There are two types of
# validation failures:
# <ul>
# <li>Error: There's something wrong with the imported data. Further
# postprocessing is not possible and the data is not usable at all.
# The import fails. #Importer::GetErrorString() or #aiGetErrorString()
# carry the error message around.</li>
# <li>Warning: There are some minor issues (e.g. 1000000 animation
# keyframes with the same time), but further postprocessing and use
# of the data structure is still safe. Warning details are written
# to the log file, <tt>#AI_SCENE_FLAGS_VALIDATION_WARNING</tt> is set
# in #aiScene::mFlags</li>
# </ul>
#
# This post-processing step is not time-consuming. Its use is not
# compulsory, but recommended.
#
aiProcess_ValidateDataStructure = 0x400

## <hr>Reorders triangles for better vertex cache locality.
#
# The step tries to improve the ACMR (average post-transform vertex cache
# miss ratio) for all meshes. The implementation runs in O(n) and is
# roughly based on the 'tipsify' algorithm (see <a href="
# http://www.cs.princeton.edu/gfx/pubs/Sander_2007_%3ETR/tipsy.pdf">this
# paper</a>).
#
# If you intend to render huge models in hardware, this step might
# be of interest to you. The <tt>#AI_CONFIG_PP_ICL_PTCACHE_SIZE</tt> config
# setting can be used to fine-tune the cache optimization.
#
aiProcess_ImproveCacheLocality = 0x800

## <hr>Searches for redundant/unreferenced materials and removes them.
#
# This is especially useful in combination with the
# #aiProcess_PretransformVertices and #aiProcess_OptimizeMeshes flags.
# Both join small meshes with equal characteristics, but they can't do
# their work if two meshes have different materials. Because several
# material settings are lost during Assimp's import filters
# (and because many exporters don't check for redundant materials), huge
# models often have materials which are defined several times with
# exactly the same settings.
#
# Several material settings not contributing to the final appearance of
# a surface are ignored in all comparisons (e.g. the material name).
# So, if you're passing additional information through the
# content pipeline (probably using #magic# material names), don't
# specify this flag. Alternatively take a look at the
# <tt>#AI_CONFIG_PP_RRM_EXCLUDE_LIST</tt> setting.
#
aiProcess_RemoveRedundantMaterials = 0x1000

## <hr>This step tries to determine which meshes have normal vectors
# that are facing inwards and inverts them.
#
# The algorithm is simple but effective:
# the bounding box of all vertices + their normals is compared against
# the volume of the bounding box of all vertices without their normals.
# This works well for most objects; problems might occur with planar
# surfaces. However, the step tries to filter such cases.
# The step inverts all in-facing normals. Generally it is recommended
# to enable this step, although the result is not always correct.
#
aiProcess_FixInfacingNormals = 0x2000

## <hr>This step splits meshes with more than one primitive type into
# homogeneous sub-meshes.
#
# The step is executed after the triangulation step. After the step
# returns, just one bit is set in aiMesh::mPrimitiveTypes. This is
# especially useful for real-time rendering where point and line
# primitives are often ignored or rendered separately.
# You can use the <tt>#AI_CONFIG_PP_SBP_REMOVE</tt> option to specify which
# primitive types you need. This can be used to easily exclude
# lines and points, which are rarely used, from the import.
#
aiProcess_SortByPType = 0x8000

## <hr>This step searches all meshes for degenerate primitives and
# converts them to proper lines or points.
#
# A face is 'degenerate' if one or more of its points are identical.
# To have the degenerate stuff not only detected and collapsed but
# removed, try one of the following procedures:
# <br><b>1.</b> (if you support lines and points for rendering but don't
# want the degenerates)<br>
# <ul>
# <li>Specify the #aiProcess_FindDegenerates flag.
# </li>
# <li>Set the <tt>AI_CONFIG_PP_FD_REMOVE</tt> option to 1. This will
# cause the step to remove degenerate triangles from the import
# as soon as they're detected. They won't pass any further
# pipeline steps.
# </li>
# </ul>
# <br><b>2.</b> (if you don't support lines and points at all)<br>
# <ul>
# <li>Specify the #aiProcess_FindDegenerates flag.
# </li>
# <li>Specify the #aiProcess_SortByPType flag. This moves line and
# point primitives to separate meshes.
# </li>
# <li>Set the <tt>AI_CONFIG_PP_SBP_REMOVE</tt> option to
# @code aiPrimitiveType_POINTS | aiPrimitiveType_LINES
# @endcode to cause SortByPType to reject point
# and line meshes from the scene.
# </li>
# </ul>
# @note Degenerate polygons are not necessarily evil and that's why
# they're not removed by default. There are several file formats which
# don't support lines or points, and some exporters bypass the
# format specification and write them as degenerate triangles instead.
#
aiProcess_FindDegenerates = 0x10000

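The classification idea behind this step can be sketched in a few lines of plain Python (illustrative only; `classify_face` is a made-up helper, not part of pyassimp):

```python
def classify_face(indices):
    """Classify a face by its number of distinct vertex indices.

    A 'degenerate' triangle with repeated indices collapses to a
    line (2 distinct points) or a point (1 distinct point).
    """
    distinct = len(set(indices))
    if distinct == 1:
        return 'point'
    if distinct == 2:
        return 'line'
    return 'triangle' if distinct == 3 else 'polygon'

print(classify_face([0, 1, 1]))  # 'line': a degenerate triangle
```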
## <hr>This step searches all meshes for invalid data, such as zeroed
# normal vectors or invalid UV coords, and removes/fixes them. This is
# intended to get rid of some common exporter errors.
#
# This is especially useful for normals. If they are invalid, and
# the step recognizes this, they will be removed and can later
# be recomputed, e.g. by the #aiProcess_GenSmoothNormals flag.<br>
# The step will also remove meshes that are infinitely small and reduce
# animation tracks consisting of hundreds of redundant keys to a single
# key. The <tt>AI_CONFIG_PP_FID_ANIM_ACCURACY</tt> config property decides
# the accuracy of the check for duplicate animation tracks.
#
aiProcess_FindInvalidData = 0x20000

## <hr>This step converts non-UV mappings (such as spherical or
# cylindrical mapping) to proper texture coordinate channels.
#
# Most applications will support UV mapping only, so you will
# probably want to specify this step in every case. Note that Assimp is not
# always able to match the original mapping implementation of the
# 3D app which produced a model perfectly. It's always better to let the
# modelling app compute the UV channels - 3ds Max, Maya, Blender,
# LightWave, and Modo do this for example.
#
# @note If this step is not requested, you'll need to process the
# <tt>#AI_MATKEY_MAPPING</tt> material property in order to display all assets
# properly.
#
aiProcess_GenUVCoords = 0x40000

## <hr>This step applies per-texture UV transformations and bakes
# them into stand-alone texture coordinate channels.
#
# UV transformations are specified per-texture - see the
# <tt>#AI_MATKEY_UVTRANSFORM</tt> material key for more information.
# This step processes all textures with
# transformed input UV coordinates and generates a new (pre-transformed) UV channel
# which replaces the old channel. Most applications won't support UV
# transformations, so you will probably want to specify this step.
#
# @note UV transformations are usually implemented in real-time apps by
# transforming texture coordinates at vertex shader stage with a 3x3
# (homogeneous) transformation matrix.
#
aiProcess_TransformUVCoords = 0x80000

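The note above about a 3x3 homogeneous matrix can be illustrated with a small CPU-side sketch (`transform_uv` is a hypothetical helper for illustration, not part of this module):

```python
def transform_uv(m, uv):
    """Apply a 3x3 homogeneous transform (row-major nested lists) to a UV pair."""
    u, v = uv
    x = m[0][0] * u + m[0][1] * v + m[0][2]
    y = m[1][0] * u + m[1][1] * v + m[1][2]
    w = m[2][0] * u + m[2][1] * v + m[2][2]
    return (x / w, y / w)

# A pure translation of the UVs by (0.5, 0.25):
m = [[1, 0, 0.5],
     [0, 1, 0.25],
     [0, 0, 1]]
print(transform_uv(m, (0.0, 0.0)))  # (0.5, 0.25)
```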
## <hr>This step searches for duplicate meshes and replaces them
# with references to the first mesh.
#
# This step takes a while, so don't use it if speed is a concern.
# Its main purpose is to work around the fact that many export
# file formats don't support instanced meshes, so exporters need to
# duplicate meshes. This step removes the duplicates again. Please
# note that Assimp does not currently support per-node material
# assignment to meshes, which means that identical meshes with
# different materials are currently #not# joined, although this is
# planned for future versions.
#
aiProcess_FindInstances = 0x100000

## <hr>A postprocessing step to reduce the number of meshes.
#
# This will, in fact, reduce the number of draw calls.
#
# This is a very effective optimization and is recommended to be used
# together with #aiProcess_OptimizeGraph, if possible. The flag is fully
# compatible with both #aiProcess_SplitLargeMeshes and #aiProcess_SortByPType.
#
aiProcess_OptimizeMeshes = 0x200000


## <hr>A postprocessing step to optimize the scene hierarchy.
#
# Nodes without animations, bones, lights or cameras assigned are
# collapsed and joined.
#
# Node names can be lost during this step. If you use special 'tag nodes'
# to pass additional information through your content pipeline, use the
# <tt>#AI_CONFIG_PP_OG_EXCLUDE_LIST</tt> setting to specify a list of node
# names you want to be kept. Nodes matching one of the names in this list won't
# be touched or modified.
#
# Use this flag with caution. Most simple files will be collapsed to a
# single node, so complex hierarchies are usually completely lost. This is not
# useful for editor environments, but probably a very effective
# optimization if you just want to get the model data, convert it to your
# own format, and render it as fast as possible.
#
# This flag is designed to be used with #aiProcess_OptimizeMeshes for best
# results.
#
# @note 'Crappy' scenes with thousands of extremely small meshes packed
# in deeply nested nodes exist for almost all file formats.
# #aiProcess_OptimizeMeshes in combination with #aiProcess_OptimizeGraph
# usually fixes them all and makes them renderable.
#
aiProcess_OptimizeGraph = 0x400000

## <hr>This step flips all UV coordinates along the y-axis and adjusts
# material settings and bitangents accordingly.
#
# <b>Output UV coordinate system:</b>
# @code
# 0x|0y ---------- 1x|0y
# |                  |
# |                  |
# |                  |
# 0x|1y ---------- 1x|1y
# @endcode
#
# You'll probably want to consider this flag if you use Direct3D for
# rendering. The #aiProcess_ConvertToLeftHanded flag supersedes this
# setting and bundles all conversions typically required for D3D-based
# applications.
#
aiProcess_FlipUVs = 0x800000

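The flip itself is just `v -> 1 - v` per UV pair; a minimal sketch of the idea (the helper name is made up, not part of this module):

```python
def flip_uv(uv):
    """Flip a UV coordinate along the y-axis, moving the origin
    between the bottom-left and top-left corner conventions."""
    u, v = uv
    return (u, 1.0 - v)

print(flip_uv((0.25, 0.0)))  # (0.25, 1.0)
```

Note that the operation is its own inverse: flipping twice returns the original coordinate.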
## <hr>This step adjusts the output face winding order to be CW.
#
# The default face winding order is counter clockwise (CCW).
#
# <b>Output face order:</b>
# @code
#       x2
#
#                   x0
#   x1
# @endcode
#
aiProcess_FlipWindingOrder = 0x1000000

## <hr>This step splits meshes with many bones into sub-meshes so that each
# sub-mesh has fewer or as many bones as a given limit.
#
aiProcess_SplitByBoneCount = 0x2000000

## <hr>This step removes bones losslessly or according to some threshold.
#
# In some cases (i.e. formats that require it) exporters are forced to
# assign dummy bone weights to otherwise static meshes assigned to
# animated meshes. Full, weight-based skinning is expensive while
# animating nodes is extremely cheap, so this step is offered to clean up
# the data in that regard.
#
# Use <tt>#AI_CONFIG_PP_DB_THRESHOLD</tt> to control this.
# Use <tt>#AI_CONFIG_PP_DB_ALL_OR_NONE</tt> if you want bones removed if and
# only if all bones within the scene qualify for removal.
#
aiProcess_Debone = 0x4000000

aiProcess_GenEntityMeshes = 0x100000
aiProcess_OptimizeAnimations = 0x200000
aiProcess_FixTexturePaths = 0x200000
aiProcess_EmbedTextures = 0x10000000

## @def aiProcess_ConvertToLeftHanded
# @brief Shortcut flag for Direct3D-based applications.
#
# Supersedes the #aiProcess_MakeLeftHanded and #aiProcess_FlipUVs and
# #aiProcess_FlipWindingOrder flags.
# The output data matches Direct3D's conventions: left-handed geometry, upper-left
# origin for UV coordinates and finally clockwise face order, suitable for CCW culling.
#
# @deprecated
#
aiProcess_ConvertToLeftHanded = ( \
    aiProcess_MakeLeftHanded | \
    aiProcess_FlipUVs | \
    aiProcess_FlipWindingOrder | \
    0 )

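Since the shortcut is a plain bitwise OR of its three constituent flags, its value can be checked directly. A self-contained sketch with the flag values copied in (assuming `aiProcess_MakeLeftHanded` is `0x4`, as defined earlier in this module):

```python
# Flag values as defined in this module (MakeLeftHanded assumed to be 0x4).
aiProcess_MakeLeftHanded = 0x4
aiProcess_FlipUVs = 0x800000
aiProcess_FlipWindingOrder = 0x1000000

aiProcess_ConvertToLeftHanded = (aiProcess_MakeLeftHanded
                                 | aiProcess_FlipUVs
                                 | aiProcess_FlipWindingOrder)

# The composite mask still contains each individual step:
assert aiProcess_ConvertToLeftHanded & aiProcess_FlipUVs
print(hex(aiProcess_ConvertToLeftHanded))  # 0x1800004
```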
## @def aiProcessPreset_TargetRealtime_Fast
# @brief Default postprocess configuration optimizing the data for real-time rendering.
#
# Applications would want to use this preset to load models on end-user PCs,
# maybe for direct use in game.
#
# If you're using DirectX, don't forget to combine this value with
# the #aiProcess_ConvertToLeftHanded step. If you don't support UV transformations
# in your application, apply the #aiProcess_TransformUVCoords step, too.
# @note Please take the time to read the docs for the steps enabled by this preset.
# Some of them offer further configurable properties, while some of them might not be of
# use for you so it might be better to not specify them.
#
aiProcessPreset_TargetRealtime_Fast = ( \
    aiProcess_CalcTangentSpace | \
    aiProcess_GenNormals | \
    aiProcess_JoinIdenticalVertices | \
    aiProcess_Triangulate | \
    aiProcess_GenUVCoords | \
    aiProcess_SortByPType | \
    0 )

## @def aiProcessPreset_TargetRealtime_Quality
# @brief Default postprocess configuration optimizing the data for real-time rendering.
#
# Unlike #aiProcessPreset_TargetRealtime_Fast, this configuration
# performs some extra optimizations to improve rendering speed and
# to minimize memory usage. It could be a good choice for a level editor
# environment where import speed is not so important.
#
# If you're using DirectX, don't forget to combine this value with
# the #aiProcess_ConvertToLeftHanded step. If you don't support UV transformations
# in your application, apply the #aiProcess_TransformUVCoords step, too.
# @note Please take the time to read the docs for the steps enabled by this preset.
# Some of them offer further configurable properties, while some of them might not be
# of use for you so it might be better to not specify them.
#
aiProcessPreset_TargetRealtime_Quality = ( \
    aiProcess_CalcTangentSpace | \
    aiProcess_GenSmoothNormals | \
    aiProcess_JoinIdenticalVertices | \
    aiProcess_ImproveCacheLocality | \
    aiProcess_LimitBoneWeights | \
    aiProcess_RemoveRedundantMaterials | \
    aiProcess_SplitLargeMeshes | \
    aiProcess_Triangulate | \
    aiProcess_GenUVCoords | \
    aiProcess_SortByPType | \
    aiProcess_FindDegenerates | \
    aiProcess_FindInvalidData | \
    0 )

## @def aiProcessPreset_TargetRealtime_MaxQuality
# @brief Default postprocess configuration optimizing the data for real-time rendering.
#
# This preset enables almost every optimization step to achieve perfectly
# optimized data. It's your choice for level editor environments where import speed
# is not important.
#
# If you're using DirectX, don't forget to combine this value with
# the #aiProcess_ConvertToLeftHanded step. If you don't support UV transformations
# in your application, apply the #aiProcess_TransformUVCoords step, too.
# @note Please take the time to read the docs for the steps enabled by this preset.
# Some of them offer further configurable properties, while some of them might not be
# of use for you so it might be better to not specify them.
#
aiProcessPreset_TargetRealtime_MaxQuality = ( \
    aiProcessPreset_TargetRealtime_Quality | \
    aiProcess_FindInstances | \
    aiProcess_ValidateDataStructure | \
    aiProcess_OptimizeMeshes | \
    0 )

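Because these presets are plain bitmasks, client code can test at runtime whether a preset enables a given step. A small self-contained sketch (flag values copied from this module):

```python
# Flag values as defined in this module.
aiProcess_Triangulate = 0x8
aiProcess_SortByPType = 0x8000
aiProcess_FindInstances = 0x100000

# e.g. a subset of what the Fast preset enables:
preset = aiProcess_Triangulate | aiProcess_SortByPType

def has_step(mask, step):
    """Return True if `mask` enables the post-processing `step`."""
    return bool(mask & step)

print(has_step(preset, aiProcess_SortByPType))    # True
print(has_step(preset, aiProcess_FindInstances))  # False
```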
1132
thirdparty/assimp/port/PyAssimp/pyassimp/structs.py
vendored
Normal file
File diff suppressed because it is too large
1318
thirdparty/assimp/port/PyAssimp/scripts/3d_viewer.py
vendored
Executable file
File diff suppressed because it is too large
1316
thirdparty/assimp/port/PyAssimp/scripts/3d_viewer_py3.py
vendored
Executable file
File diff suppressed because it is too large
13
thirdparty/assimp/port/PyAssimp/scripts/README.md
vendored
Normal file
@@ -0,0 +1,13 @@
pyassimp examples
=================

- `sample.py`: shows how to load a model with pyassimp, and display some statistics.
- `3d_viewer.py`: an OpenGL 3D viewer that requires shaders.
- `fixed_pipeline_3d_viewer.py`: an OpenGL 3D viewer using the old fixed pipeline.
  For illustration only; base new projects on `3d_viewer.py`.

Requirements for the 3D viewers:

- `pyopengl` (on Ubuntu/Debian, `sudo apt-get install python-opengl`)
- `pygame` (on Ubuntu/Debian, `sudo apt-get install python-pygame`)
372
thirdparty/assimp/port/PyAssimp/scripts/fixed_pipeline_3d_viewer.py
vendored
Executable file
@@ -0,0 +1,372 @@
#!/usr/bin/env python
#-*- coding: UTF-8 -*-

""" This program demonstrates the use of pyassimp to load and
render objects with OpenGL.

'c' cycles between cameras (if any available)
'q' to quit

This example mixes 'old' OpenGL fixed-function pipeline with
Vertex Buffer Objects.

Materials are supported but textures are currently ignored.

For a more advanced example (with shaders + keyboard/mouse
controls), check scripts/sdl_viewer.py

Author: Séverin Lemaignan, 2012

This sample is based on several sources, including:
- http://www.lighthouse3d.com/tutorials
- http://www.songho.ca/opengl/gl_transform.html
- http://code.activestate.com/recipes/325391/
- ASSIMP's C++ SimpleOpenGL viewer
"""

import sys
from OpenGL.GLUT import *
from OpenGL.GLU import *
from OpenGL.GL import *

import logging
logger = logging.getLogger("pyassimp_opengl")
logging.basicConfig(level=logging.INFO)

import math
import numpy

import pyassimp
from pyassimp.postprocess import *
from pyassimp.helper import *


name = 'pyassimp OpenGL viewer'
height = 600
width = 900

class GLRenderer():
    def __init__(self):

        self.scene = None

        self.using_fixed_cam = False
        self.current_cam_index = 0

        # store the global scene rotation
        self.angle = 0.

        # for FPS calculation
        self.prev_time = 0
        self.prev_fps_time = 0
        self.frames = 0

|
||||
""" Creates 3 buffer objets for each mesh,
|
||||
to store the vertices, the normals, and the faces
|
||||
indices.
|
||||
"""
|
||||
|
||||
mesh.gl = {}
|
||||
|
||||
# Fill the buffer for vertex positions
|
||||
mesh.gl["vertices"] = glGenBuffers(1)
|
||||
glBindBuffer(GL_ARRAY_BUFFER, mesh.gl["vertices"])
|
||||
glBufferData(GL_ARRAY_BUFFER,
|
||||
mesh.vertices,
|
||||
GL_STATIC_DRAW)
|
||||
|
||||
# Fill the buffer for normals
|
||||
mesh.gl["normals"] = glGenBuffers(1)
|
||||
glBindBuffer(GL_ARRAY_BUFFER, mesh.gl["normals"])
|
||||
glBufferData(GL_ARRAY_BUFFER,
|
||||
mesh.normals,
|
||||
GL_STATIC_DRAW)
|
||||
|
||||
|
||||
# Fill the buffer for vertex positions
|
||||
mesh.gl["triangles"] = glGenBuffers(1)
|
||||
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh.gl["triangles"])
|
||||
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
|
||||
mesh.faces,
|
||||
GL_STATIC_DRAW)
|
||||
|
||||
# Unbind buffers
|
||||
glBindBuffer(GL_ARRAY_BUFFER,0)
|
||||
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0)
|
||||
|
||||
    def load_model(self, path, postprocess = None):
        logger.info("Loading model: " + path + "...")

        if postprocess:
            self.scene = pyassimp.load(path, processing=postprocess)
        else:
            self.scene = pyassimp.load(path)
        logger.info("Done.")

        scene = self.scene
        # log some statistics
        logger.info("  meshes: %d" % len(scene.meshes))
        logger.info("  total faces: %d" % sum([len(mesh.faces) for mesh in scene.meshes]))
        logger.info("  materials: %d" % len(scene.materials))
        self.bb_min, self.bb_max = get_bounding_box(self.scene)
        logger.info("  bounding box: " + str(self.bb_min) + " - " + str(self.bb_max))

        self.scene_center = [(a + b) / 2. for a, b in zip(self.bb_min, self.bb_max)]

        for index, mesh in enumerate(scene.meshes):
            self.prepare_gl_buffers(mesh)

        # Finally release the model
        pyassimp.release(scene)

    def cycle_cameras(self):
        if not self.scene.cameras:
            return None
        self.current_cam_index = (self.current_cam_index + 1) % len(self.scene.cameras)
        cam = self.scene.cameras[self.current_cam_index]
        logger.info("Switched to camera " + str(cam))
        return cam

    def set_default_camera(self):

        if not self.using_fixed_cam:
            glLoadIdentity()

            gluLookAt(0., 0., 3.,
                      0., 0., -5.,
                      0., 1., 0.)

    def set_camera(self, camera):

        if not camera:
            return

        self.using_fixed_cam = True

        znear = camera.clipplanenear
        zfar = camera.clipplanefar
        aspect = camera.aspect
        fov = camera.horizontalfov

        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()

        # Compute the GL frustum
        tangent = math.tan(fov / 2.)
        h = znear * tangent
        w = h * aspect

        # params: left, right, bottom, top, near, far
        glFrustum(-w, w, -h, h, znear, zfar)
        # equivalent to:
        # gluPerspective(fov * 180 / math.pi, aspect, znear, zfar)

        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()

        cam = transform(camera.position, camera.transformation)
        at = transform(camera.lookat, camera.transformation)
        gluLookAt(cam[0], cam[2], -cam[1],
                  at[0],  at[2],  -at[1],
                  0, 1, 0)

    def fit_scene(self, restore = False):
        """ Compute a scale factor and a translation to fit and center
        the whole geometry on the screen.
        """

        x_max = self.bb_max[0] - self.bb_min[0]
        y_max = self.bb_max[1] - self.bb_min[1]
        tmp = max(x_max, y_max)
        z_max = self.bb_max[2] - self.bb_min[2]
        tmp = max(z_max, tmp)

        if not restore:
            tmp = 1. / tmp

        logger.info("Scaling the scene by %.03f" % tmp)
        glScalef(tmp, tmp, tmp)

        # center the model
        direction = -1 if not restore else 1
        glTranslatef(direction * self.scene_center[0],
                     direction * self.scene_center[1],
                     direction * self.scene_center[2])

        return x_max, y_max, z_max

    def apply_material(self, mat):
        """ Apply an OpenGL material, using one OpenGL display list per
        material to cache the operation.
        """

        if not hasattr(mat, "gl_mat"):  # evaluate the mat properties once, and cache the values in a glDisplayList.
            diffuse = numpy.array(mat.properties.get("diffuse", [0.8, 0.8, 0.8, 1.0]))
            specular = numpy.array(mat.properties.get("specular", [0., 0., 0., 1.0]))
            ambient = numpy.array(mat.properties.get("ambient", [0.2, 0.2, 0.2, 1.0]))
            emissive = numpy.array(mat.properties.get("emissive", [0., 0., 0., 1.0]))
            shininess = min(mat.properties.get("shininess", 1.0), 128)
            wireframe = mat.properties.get("wireframe", 0)
            twosided = mat.properties.get("twosided", 1)

            setattr(mat, "gl_mat", glGenLists(1))
            glNewList(mat.gl_mat, GL_COMPILE)

            glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse)
            glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular)
            glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient)
            glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emissive)
            glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, shininess)
            glPolygonMode(GL_FRONT_AND_BACK, GL_LINE if wireframe else GL_FILL)
            glDisable(GL_CULL_FACE) if twosided else glEnable(GL_CULL_FACE)

            glEndList()

        glCallList(mat.gl_mat)

    def do_motion(self):

        gl_time = glutGet(GLUT_ELAPSED_TIME)

        self.angle = (gl_time - self.prev_time) * 0.1

        self.prev_time = gl_time

        # Compute FPS
        self.frames += 1
        if gl_time - self.prev_fps_time >= 1000:
            current_fps = self.frames * 1000 / (gl_time - self.prev_fps_time)
            logger.info('%.0f fps' % current_fps)
            self.frames = 0
            self.prev_fps_time = gl_time

        glutPostRedisplay()

    def recursive_render(self, node):
        """ Main recursive rendering method.
        """

        # save model matrix and apply node transformation
        glPushMatrix()
        m = node.transformation.transpose()  # OpenGL is row major
        glMultMatrixf(m)

        for mesh in node.meshes:
            self.apply_material(mesh.material)

            glBindBuffer(GL_ARRAY_BUFFER, mesh.gl["vertices"])
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointer(3, GL_FLOAT, 0, None)

            glBindBuffer(GL_ARRAY_BUFFER, mesh.gl["normals"])
            glEnableClientState(GL_NORMAL_ARRAY)
            glNormalPointer(GL_FLOAT, 0, None)

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh.gl["triangles"])
            glDrawElements(GL_TRIANGLES, len(mesh.faces) * 3, GL_UNSIGNED_INT, None)

            glDisableClientState(GL_VERTEX_ARRAY)
            glDisableClientState(GL_NORMAL_ARRAY)

            glBindBuffer(GL_ARRAY_BUFFER, 0)
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)

        for child in node.children:
            self.recursive_render(child)

        glPopMatrix()

    def display(self):
        """ GLUT callback to redraw the OpenGL surface
        """
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        glRotatef(self.angle, 0., 1., 0.)
        self.recursive_render(self.scene.rootnode)

        glutSwapBuffers()
        self.do_motion()
        return

    ####################################################################
    ##               GLUT keyboard and mouse callbacks                ##
    ####################################################################
    def onkeypress(self, key, x, y):
        if key == 'c':
            self.fit_scene(restore = True)
            self.set_camera(self.cycle_cameras())
        if key == 'q':
            sys.exit(0)

    def render(self, filename=None, fullscreen = False, autofit = True, postprocess = None):
        """

        :param autofit: if true, scale the scene to fit the whole geometry
        in the viewport.
        """

        # First initialize the openGL context
        glutInit(sys.argv)
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
        if not fullscreen:
            glutInitWindowSize(width, height)
            glutCreateWindow(name)
        else:
            glutGameModeString("1024x768")
            if glutGameModeGet(GLUT_GAME_MODE_POSSIBLE):
                glutEnterGameMode()
            else:
                print("Fullscreen mode not available!")
                sys.exit(1)

        self.load_model(filename, postprocess = postprocess)

        glClearColor(0.1, 0.1, 0.1, 1.)
        #glShadeModel(GL_SMOOTH)

        glEnable(GL_LIGHTING)

        glEnable(GL_CULL_FACE)
        glEnable(GL_DEPTH_TEST)

        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE)
        glEnable(GL_NORMALIZE)
        glEnable(GL_LIGHT0)

        glutDisplayFunc(self.display)

        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        gluPerspective(35.0, width / float(height), 0.10, 100.0)
        glMatrixMode(GL_MODELVIEW)
        self.set_default_camera()

        if autofit:
            # scale the whole asset to fit into our view frustum
            self.fit_scene()

        glPushMatrix()

        glutKeyboardFunc(self.onkeypress)
        glutIgnoreKeyRepeat(1)

        glutMainLoop()


if __name__ == '__main__':
    if not len(sys.argv) > 1:
        print("Usage: " + __file__ + " <model>")
        sys.exit(0)

    glrender = GLRenderer()
    glrender.render(sys.argv[1], fullscreen = False, postprocess = aiProcessPreset_TargetRealtime_MaxQuality)

53
thirdparty/assimp/port/PyAssimp/scripts/quicktest.py
vendored
Executable file
@@ -0,0 +1,53 @@
#!/usr/bin/env python
#-*- coding: UTF-8 -*-

"""
This module uses the sample.py script to load all test models it finds.

Note: this is not an exhaustive test suite, it does not check the
data structures in detail. It just verifies whether basic
loading and querying of 3d models using pyassimp works.
"""

import os
import sys

# Make the development (ie. GIT repo) version of PyAssimp available for import.
sys.path.insert(0, '..')

import sample
from pyassimp import errors

# Paths to model files.
basepaths = [os.path.join('..', '..', '..', 'test', 'models'),
             os.path.join('..', '..', '..', 'test', 'models-nonbsd')]

# Valid extensions for 3D model files.
extensions = ['.3ds', '.x', '.lwo', '.obj', '.md5mesh', '.dxf', '.ply', '.stl',
              '.dae', '.md5anim', '.lws', '.irrmesh', '.nff', '.off', '.blend']


def run_tests():
    ok, err = 0, 0
    for path in basepaths:
        print("Looking for models in %s..." % path)
        for root, dirs, files in os.walk(path):
            for afile in files:
                base, ext = os.path.splitext(afile)
                if ext in extensions:
                    try:
                        sample.main(os.path.join(root, afile))
                        ok += 1
                    except errors.AssimpError as error:
                        # Assimp error is fine; this is a controlled case.
                        print(error)
                        err += 1
                    except Exception:
                        print("Error encountered while loading <%s>"
                              % os.path.join(root, afile))
    print('** Loaded %s models, got controlled errors for %s files'
          % (ok, err))


if __name__ == '__main__':
    run_tests()
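The directory walk in `run_tests` only attempts files whose extension appears in `extensions`. That filter can be exercised on its own; a minimal sketch (the file names and the `model_files` helper are made up for illustration):

```python
import os

# Subset of the extensions list above, enough to show the filter.
extensions = ['.3ds', '.obj', '.stl']

def model_files(names):
    # Keep only names whose extension marks them as a 3D model,
    # mirroring the os.path.splitext() check in run_tests().
    return [n for n in names if os.path.splitext(n)[1] in extensions]

print(model_files(['cube.obj', 'notes.txt', 'part.stl', 'scene.blend']))
# → ['cube.obj', 'part.stl']
```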
89
thirdparty/assimp/port/PyAssimp/scripts/sample.py
vendored
Executable file
@@ -0,0 +1,89 @@
#!/usr/bin/env python
#-*- coding: UTF-8 -*-

"""
This module demonstrates the functionality of PyAssimp.
"""

import sys
import logging
logging.basicConfig(level=logging.INFO)

import pyassimp
import pyassimp.postprocess


def recur_node(node, level=0):
    print("  " + "\t" * level + "- " + str(node))
    for child in node.children:
        recur_node(child, level + 1)


def main(filename=None):

    scene = pyassimp.load(filename, processing=pyassimp.postprocess.aiProcess_Triangulate)

    # the model we load
    print("MODEL:" + filename)
    print()

    # write some statistics
    print("SCENE:")
    print("  meshes:" + str(len(scene.meshes)))
    print("  materials:" + str(len(scene.materials)))
    print("  textures:" + str(len(scene.textures)))
    print()

    print("NODES:")
    recur_node(scene.rootnode)

    print()
    print("MESHES:")
    for index, mesh in enumerate(scene.meshes):
        print("  MESH" + str(index + 1))
        print("    material id:" + str(mesh.materialindex + 1))
        print("    vertices:" + str(len(mesh.vertices)))
        print("    first 3 verts:\n" + str(mesh.vertices[:3]))
        if mesh.normals.any():
            print("    first 3 normals:\n" + str(mesh.normals[:3]))
        else:
            print("    no normals")
        print("    colors:" + str(len(mesh.colors)))
        tcs = mesh.texturecoords
        if tcs.any():
            for tc_index, tc in enumerate(tcs):
                print("    texture-coords " + str(tc_index) + ":" + str(len(tcs[tc_index])) + " first 3:" + str(tcs[tc_index][:3]))
        else:
            print("    no texture coordinates")
        print("    uv-component-count:" + str(len(mesh.numuvcomponents)))
        print("    faces:" + str(len(mesh.faces)) + " -> first:\n" + str(mesh.faces[:3]))
        print("    bones:" + str(len(mesh.bones)) + " -> first:" + str([str(b) for b in mesh.bones[:3]]))
        print()

    print("MATERIALS:")
    for index, material in enumerate(scene.materials):
        print("  MATERIAL (id:" + str(index + 1) + ")")
        for key, value in material.properties.items():
            print("    %s: %s" % (key, value))
    print()

    print("TEXTURES:")
    for index, texture in enumerate(scene.textures):
        print("  TEXTURE" + str(index + 1))
        print("    width:" + str(texture.width))
        print("    height:" + str(texture.height))
        print("    hint:" + str(texture.achformathint))
        print("    data (size):" + str(len(texture.data)))

    # Finally release the model
    pyassimp.release(scene)


def usage():
    print("Usage: sample.py <3d model>")


if __name__ == "__main__":
    if len(sys.argv) != 2:
        usage()
    else:
        main(sys.argv[1])
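`recur_node` in sample.py prints the scene graph depth-first, indenting one tab per level. The same traversal can be tried without loading a model; the `Node` class below is a stand-in for pyassimp's node objects (which expose `children` and a readable `str()`), not pyassimp's actual API, and `format_node` collects lines instead of printing so the result is easy to inspect:

```python
class Node:
    # Minimal stand-in for a pyassimp scene-graph node.
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def __str__(self):
        return self.name


def format_node(node, level=0):
    # Same shape as recur_node(): one line per node, one tab per level.
    lines = ["  " + "\t" * level + "- " + str(node)]
    for child in node.children:
        lines.extend(format_node(child, level + 1))
    return lines


root = Node("rootnode", [Node("body", [Node("wheel")]), Node("camera")])
print("\n".join(format_node(root)))
```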
1705
thirdparty/assimp/port/PyAssimp/scripts/transformations.py
vendored
Normal file
File diff suppressed because it is too large
26
thirdparty/assimp/port/PyAssimp/setup.py
vendored
Normal file
@@ -0,0 +1,26 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from distutils.core import setup


def readme():
    with open('README.rst') as f:
        return f.read()


setup(name='pyassimp',
      version='4.1.4',
      license='ISC',
      description='Python bindings for the Open Asset Import Library (ASSIMP)',
      long_description=readme(),
      url='https://github.com/assimp/assimp',
      author='ASSIMP developers',
      author_email='assimp-discussions@lists.sourceforge.net',
      maintainer='Séverin Lemaignan',
      maintainer_email='severin@guakamole.org',
      packages=['pyassimp'],
      data_files=[
          ('share/pyassimp', ['README.rst']),
          ('share/examples/pyassimp', ['scripts/' + f for f in os.listdir('scripts/')])
      ],
      requires=['numpy']
      )