China Daheng Group, Inc. Beijing Image Vision Technology Branch
MARS CoaXPress Cameras User Manual
Version: V1.0.3
Date: 2024-10-21...
All rights reserved. No part of this manual may be used or reproduced, in any form or by any means, without the prior written permission of China Daheng Group, Inc. Beijing Image Vision Technology Branch. The right is also reserved to modify or change any part of this manual in the future without prior notification. All other trademarks are the properties of their respective owners.
Preface
Thank you for choosing DAHENG IMAGING products. The MARS CoaXPress camera is DAHENG IMAGING's latest area scan industrial digital camera with a large, high-quality sensor, featuring high resolution, high definition and extremely low noise. The camera is equipped with a standard CoaXPress interface for high transmission speed and high transmission stability.
8.3.2. Sensor Bit Depth 67
8.3.3. PGA Gain 67
8.3.4. Pixel Format 68
8.3.5. ROI 73
8.3.6. Auto Exposure/Auto Gain 73
8.3.7. Test Pattern 75
8.3.8. User Set Control 77
8.3.9. Device User ID 79
8.3.10.
8.6. Sequencer 108
8.6.1. GUI 108
8.6.2. User Guide 109
8.7. Fan and TEC Control 110
8.7.1. GUI 110
8.7.2. Precautions 111
9. Software Tool 112
9.1. LUT Create Tool 112
9.1.1. GUI 112
9.1.2.
1. Introduction
1.1. Series Introduction
The MARS CoaXPress camera is DAHENG IMAGING's latest industrial digital camera, featuring high quality images, low power consumption, high transmission speed, stable operation, and rapid cooling. The cameras are available in a variety of resolutions and frame rates, and with multiple cooling options: Thermoelectric Cooling (TEC) with a fan, fan cooling, and passive cooling.
2. Precautions
2.1. Safety Claim
Before installing and using DAHENG IMAGING products, please read this manual carefully and strictly comply with the usage requirements. Ensure that the product is used under the specified conditions; otherwise, equipment malfunction may result. Our company will not bear any legal responsibility for damage or injury caused by improper use of this product or disregard of the safety instructions.
To avoid collecting dust on the optical filter, always keep the plastic cap on the camera when no lens is mounted. Select a CXP frame grabber that matches the camera's frame rate, such as a DAHENG IMAGING, Matrox, or Euresys frame grabber. Multi-channel cameras should use 75Ω coaxial cable certified for the CoaXPress interface.
The M5 screw assembly torque must be ≤ 6 N·m, and the M4 screw assembly torque must be ≤ 5 N·m. Excessive assembly torque may strip the camera's screw threads.
2.6. Certification and Declaration
1) CE, RoHS
We declare that DAHENG IMAGING MARS CoaXPress digital cameras have passed the following EU certifications: 2014/30/EU—Electromagnetic Compatibility Restriction ...
This interface is developed according to the generic transport layer standard in the GEN<i>CAM standard. DAHENG IMAGING follows the GEN<i>CAM standard and provides a GenTL interface to the user, and the user can use the GenTL interface directly to develop their own control program.
3. Installation
GenTL: a generic Transport Layer Interface, between software drivers and libraries, that transports the image data from the camera to the application running on a PC.
SFNC: a common naming convention for camera features, which promotes interoperability between products from different manufacturers.
Figure 3-1 GEN<i>CAM standard schematic diagram
3.2.
3.3. Camera Connection
Make sure that you have installed a CoaXPress 2.0 frame grabber and its related software in your computer. You can then configure a link between a camera and the CXP-12 frame grabber using four coaxial cables. To connect the camera to your computer, follow the steps below: Plug one end of a coaxial cable into CH1 of the CXP connector on the camera and the other end into CH1 of the CXP-12 frame grabber in your computer.
After opening the TL file, GalaxyView will automatically load the currently selected .cti file.
Figure 3-3 Euresys frame grabber add TL file
The name of the current frame grabber will be displayed under "Load GenTL". Double-click the name of the current frame grabber to load the CXP camera connected to it. To use a CXP camera, users need to load the TL file corresponding to the CXP frame grabber.
4. General Specification
4.1. Explanation of Important Parameters
4.1.1. About Spectral Response
QE: Quantum efficiency, the ratio of the average number of photoelectrons produced per unit time to the number of incident photons at a given wavelength.
Sensitivity: The change of the sensor output signal relative to the incident light energy.
Synchronization: Hardware trigger, software trigger
I/O: 1 opto-isolated input and 1 opto-isolated output, 1 programmable GPIO, 1 RS232
Operating Temp.: 0°C~45°C
Storage Temp.: -20°C~70°C
Operating Humidity: 10%~80%
Cooling Method: Thermoelectric Cooling (TEC) with a fan
Cooling Temp.: Typ. 15°C±0.5°C below ambient temp. @ room temp.
Note: The MARS-6501-31X2M/C-TF uses a Grade2 sensor, and the MARS-6500-31X2M/C-TF uses a Grade1 sensor. The only difference between the two cameras is the grade of the sensor. The difference between Grade1 and Grade2 sensors, as defined by the sensor manufacturer, is that Grade1 sensors have no consecutive defect pixel clusters, while Grade2 sensors may have up to 12 consecutive defect pixel clusters.
Synchronization: Hardware trigger, software trigger
I/O: 1 opto-isolated input and 1 opto-isolated output, 1 programmable GPIO, 1 RS232
Operating Temp.: 0°C~45°C
Storage Temp.: -20°C~70°C
Operating Humidity: 10%~80%
Cooling Method:
Power Consumption: 16W@24V, ambient temp. 25°C, FAN (ON); 14W@24V, ambient temp. 25°C, FAN (OFF)
Lens Mount: M58, F
Data Interface...
Color camera
Example 1: A cluster with 4 consecutive defect pixels within the same Bayer color plane in a row is not allowed.
Figure 4-9 MARS-2101-230X2C-NF clusters within same Bayer color plane distribution diagram
Example 2: When different Bayer color planes are combined, the maximum cluster size is 8 in any given 5x5 pixel array.
Operating System: Windows Win7/Win8/Win10/Win11
Programmable Control: Image size, gain, exposure time, trigger polarity, flash polarity
Conformity: CE, RoHS, CoaXPress2.0, GenICam
Table 4-6 MARS-2625-150X2M/C(-NF) \ MARS-2626-150X2M/C(-NF) camera specifications
Note: The MARS-2626-150X2M/C(-NF) uses a Grade2 sensor, and the MARS-2625-150X2M/C(-NF) uses a Grade1 sensor. The MARS-2626-150X2M/C(-NF) clusters distribution diagram is the same as the MARS-2101-230X2M/C-NF clusters distribution diagram; see details in 4.2.5.
Mono/Color: Color / Mono / Color / Mono
Pixel Formats: Color: Bayer GB8, Bayer GB10; Mono: Mono8, Mono10
Signal Noise Ratio: 40.2dB (Color) / 40.4dB (Mono)
Exposure Time: 13μs~1s, actual step: 1μs
Gain: 0dB~16dB, default: 0dB, step: 0.1dB
Binning: 1×1, 1×2, 1×4, 2×1, 2×2, 2×4, 4×1, 4×2, 4×4
Decimation: Horizontal FPGA, Vertical Sensor: 1×1, 1×2, 1×4, 2×1, 2×2, 2×4, 4×1, 4×2, 4×4
Synchronizati...
5. Dimensions
5.1. Camera Dimensions
The mechanical dimensions for each model of the MARS-CXP camera are shown in the table below.
Model / Lens Mount / Cooling / Mechanical Dimensions
MARS-6500-31X2M/C-TF: TEC + Fan
MARS-6501-31X2M/C-TF: TEC + Fan
MARS-10300-24X2M/C-TF: TEC + Fan
MARS-15200-16X2M/C-TF: TEC + Fan...
5.2. Optical Interface
The back-flange distance and maximum allowed lens thread length for each model are shown in the table below. (If other lens mounts are required, please contact sales or technical support for information.)
Model / Lens mount / Back-flange distance / Maximum lens thread length / Optical interface...
Figure 5-18 Optical interface E
The color models are equipped with an IR filter with a cut-off wavelength of 700nm. The mono models are equipped with a transparent glass. Removing the IR filter or transparent glass will defocus the image plane. Contact our technical support if the glass needs to be removed.
MARS-24600-12X2M/C-TF front mounting holes:
Screw specification / Tripod adapter step thickness / Spring washer thickness / Screwing length of camera screw thread
M5*8 hexagon socket head cap screw: 2 mm / 1.1 mm / 4.9 mm
M5*10 hexagon socket head cap screw: 4 mm / 1.1 mm / 4.9 mm
M5*12 hexagon socket head cap screw...
7. Electrical Interface
7.1. LED Light
Four LED lights on the back cover of the camera indicate the camera's status, as shown in Table 7-1. Each LED can display 3 colors: red, yellow and green.
LED status / Camera status
The camera is powered off / The connection is not enabled...
7.3. I/O Port
7.3.1. I/O Connector Pin Definition
The camera I/O port is implemented with a Hirose 12-pin receptacle (No. HR10A-10R-12PB(71)), and the corresponding cable plug is HR10A-10P-12S(73).
Definition / Core Color / Description
Line0+ / Green / Opto-isolated input +
Blue / PWR GND & GPIO GND
Line0- / Grey / Opto-isolated input -...
Logic 0 input voltage: 0V~+2.5V (Line0+ voltage)
Logic 1 input voltage: +5V~+24V (Line0+ voltage)
Minimum input current: 7mA
The status is unstable when the input voltage is between 2.5V and 5V, and should be avoided.
Rising edge delay: <50μs (0°C~45°C), parameter description as shown in Figure 7-5
Falling edge delay: <50μs (0°C~45°C), parameter description as shown in Figure 7-5
Ambient temperature and input voltage both influence the delay time of the opto-isolated input circuit.
Range of external voltage (EXVCC): 5~24V
Maximum output current of Line1: 25mA
The transistor voltage drop and output current of the opto-isolated output circuit in a typical application environment (temperature 25°C) are shown in Table 7-5.
External voltage / External resistance / Transistor voltage drop...
Delay time (td): the time required from the 50% rising point of OUTPUT1 to the point where LINE1+ has decreased to 90% of its maximum value
Falling time (tf): the time taken for the amplitude of LINE1+ to fall from 90% to 10% of its maximum value
Logic 0 input voltage: 0V~+0.6V (Line2 voltage)
Logic 1 input voltage: +1.9V~+24V (Line2 voltage)
The status is unstable when the input voltage is between 0.6V and 1.9V, and should be avoided.
When the input of Line2/3 is high, the input current is lower than 100μA. When the input of Line2/3 is low, the input current is lower than -1mA.
Input rising time delay: <2μs (0°C~45°C), parameter description as shown in Figure 7-12
Input falling time delay: <2μs (0°C~45°C), parameter description as shown in Figure 7-12
Figure 7-12 Parameter of Line2 input circuit
7.3.2.3.2. Line2 is Configured as Output
Delay time (td): the time required from the 50% rising point of OUTPUT2 to the point where Line2 has decreased to 90% of its maximum value
Falling time (tf): the time taken for the amplitude of Line2 to fall from 90% to 10% of its maximum value
8. Features
MARS CoaXPress cameras support a variety of standard and advanced features. Feature support varies slightly between models; please refer to the DAHENG Cameras Feature List for details.
8.1. I/O Control
8.1.1. Input Mode Operation
Configuring a line as input: the camera has two input signals, Line0 and Line2.
Example 1: If the trigger delay value is set to 1000ms, the trigger signal becomes valid after a 1000ms delay, as shown in Figure 8-2.
Figure 8-2 Trigger delay schematic diagram
Input Inverter
The signal level of the input lines is configurable. The user can select whether the input level is inverted or not by setting "LineInverter".
8.Features Each output source of the two output lines is configurable, and the output source includes: Strobe, UserOutput0, UserOutput1, UserOutput2, ExposureActive, FrameTriggerWait, AcquisitionTriggerWait, Timer1Active. The default output source of the camera is UserOutput0 when the camera is powered on. What status (high or low level) of the output signal is valid depends on the specific external circuit. The following signal diagrams are described as examples of active low.
Figure 8-5 Global shutter mode "ExposureActive" signal schematic diagram
Figure 8-6 Electronic rolling shutter mode "ExposureActive" signal schematic diagram
8.Features TriggerWait The "TriggerWait" signal can be used to optimize the acquisition of the trigger image and to avoid excessive triggering. It is recommended to use the "TriggerWait" signal only when the camera is configured for hardware trigger. For software trigger, please use the "AcquisitionStatus". When the camera is ready to receive a trigger signal of the corresponding trigger mode, the "TriggerWait"...
Setting the user-defined status for the output lines
The camera can route a user-defined signal to an output line by setting "LineSource" to a user output. Use "UserOutputSelector" to select UserOutput0, UserOutput1 or UserOutput2, and use "UserOutputValue" to set the user-defined output value; the default value is false when the camera is powered on. A configuration sketch follows below.
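The following is a minimal sketch of this configuration, assuming a GenICam-style feature map. A plain Python dict stands in for the camera's node map; "LineSelector" and "LineMode" are standard SFNC names assumed here, while "LineSource", "UserOutputSelector" and "UserOutputValue" come from the manual.

```python
# Sketch only: a dict stands in for the camera's GenICam feature map.
# With a real camera, these assignments would go through the SDK's node map.
features = {}

features["LineSelector"] = "Line1"             # assumed SFNC selector for the output line
features["LineMode"] = "Output"                # assumed SFNC feature; line acts as output
features["LineSource"] = "UserOutput0"         # drive the line from UserOutput0 (manual)

features["UserOutputSelector"] = "UserOutput0" # select which user output to configure
features["UserOutputValue"] = True             # user-defined level (default false at power-on)

print(features)
```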
8.2. Image Acquisition Control
8.2.1. Acquisition Start and Stop
8.2.1.1. Acquisition Start
The AcquisitionStart command can be sent immediately after opening the camera. The acquisition process in continuous mode is illustrated in Figure 8-11, and the acquisition process in trigger mode is illustrated in Figure 8-12.
In trigger mode, sending the AcquisitionStart command is not enough; a trigger signal is also needed. Each time a frame trigger is applied (either a software trigger or a hardware trigger), the camera acquires and transmits one frame of image data.
8.2.1.2. Acquisition Stop
The AcquisitionStop command can be sent to the camera at any time.
After the camera has transferred a whole frame, it goes into the exposure state. If the user sends an AcquisitionStop command during exposure, the camera immediately stops the current exposure and returns to the stop-acquisition state after finishing the readout of the incomplete frame. The camera will not send exception frames to users.
8.Features You can check if the camera is in the waiting trigger status by the camera's trigger wait signal or by using the acquisition status function. 8.2.3. Trigger Type Selection Two camera trigger types are available: FrameStart and FrameBurstStart. Different trigger types correspond to their respective set of trigger configurations, including trigger mode, trigger delay, trigger source, trigger polarity, and software trigger commands.
For example, suppose the FrameStart trigger mode and the FrameBurstStart trigger mode are enabled at the same time and the "Acquisition burst frame count" parameter is set to 3. When the camera receives a FrameBurstStart trigger signal alone, no image is acquired; each time a FrameStart trigger signal is received, the camera acquires 1 image.
Set the Trigger Mode to ON.
Set the Trigger Source to Software.
Send the Software Trigger command.
All software trigger commands are sent by the host through the CoaXPress protocol to trigger the camera to acquire and transmit images. A configuration sketch follows below.
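Below is a minimal sketch of that sequence, with a plain dict standing in for the camera's feature map and a stub function standing in for command execution. The feature values follow the steps above; "TriggerSelector" = "FrameStart" and the "TriggerSoftware"/"AcquisitionStart" command names follow common SFNC usage and are assumptions here.

```python
# Sketch only: not tied to a specific SDK. "features" stands in for the
# camera's GenICam feature map, "execute" for command-feature execution.
def configure_software_trigger(features, execute):
    features["TriggerSelector"] = "FrameStart"  # assumed selector
    features["TriggerMode"] = "On"              # step 1: enable trigger mode
    features["TriggerSource"] = "Software"      # step 2: select software trigger
    execute("AcquisitionStart")                 # start acquisition
    execute("TriggerSoftware")                  # step 3: fire one software trigger

# Stand-ins so the sketch runs without hardware:
configure_software_trigger({}, lambda cmd: print("execute:", cmd))
```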
The camera's trigger source Line0 uses an opto-isolated circuit to isolate the signal. The internal circuit delays the trigger signal, and the rising-edge delay is shorter than the falling-edge delay: the rising edge is delayed by about a dozen clock cycles, while the falling edge is delayed by several dozen clock cycles. If you use Line0 to trigger the camera, a positive pulse will become wider (by about 20μs~40μs) and a negative pulse will become narrower (by about 20μs~40μs).
Figure 8-20 The exposure sequence diagram in non-overlapping exposure mode
Trigger acquisition mode
If the interval between two triggers is greater than the sum of the exposure time and readout time, overlapping exposure will not occur, as shown in Figure 8-21.
Continuous acquisition mode
If the exposure time is greater than the frame blanking time, the exposure time and the readout time will overlap, as shown in Figure 8-22.
Trigger acquisition mode
When the interval between two triggers is less than the sum of the exposure time and the readout time, overlapping exposure will occur, as shown in Figure 8-23.
If falling edge triggering is enabled, exposure starts when the trigger signal falls and continues until the exposure time has expired, as shown in Figure 8-25.
Figure 8-25 The sequence diagram in falling edge trigger of Timed exposure mode
Avoid overtriggering in Timed exposure mode.
Prerequisites
a) Set the TriggerMode parameter to On.
b) Set the TriggerSource parameter to one of the available hardware trigger sources, e.g., Line0.
c) Set the ExposureMode parameter to TriggerWidth exposure mode.
How it works
The user can use overlapping image acquisition to increase the frame rate of the camera. With overlapping image acquisition, the exposure of a new image begins while the camera is still reading out the sensor data of the previous image.
Figure 8-28 Global shutter
Electronic Rolling Shutter
The implementation of the electronic rolling shutter is shown in Figure 8-29. Unlike the global shutter, the electronic rolling shutter starts exposure from the first line and starts the exposure of the second line one row period later.
Figure 8-30 Global Reset Release shutter
In GRR mode, a line-by-line exposure sensor starts the exposure of all lines at the same time, and the exposure ends successively from top to bottom.
Figure 8-31 UltraShort exposure time mode
In UltraShort exposure time mode, the camera does not support automatic adjustment of the exposure time; only manual adjustment of the exposure time is supported.
8.2.10.4. Set Exposure Time
The camera supports setting the exposure time with a step of 1μs. The exposure precision of the camera is limited by the sensor: although the step in the user interface and the demo is displayed as 1μs, the actual step is one row period.
Figure 8-32 The exposure delay sequence diagram in overlapping exposure mode
When a trigger signal is received and the sensor starts exposure, there is a small delay. This delay is called the exposure delay and consists of five parts.
The following table shows the total exposure delay time for each sensor. T1 is calculated using the typical delay (5μs) of Line0; for Line2 or CXPTrigger, T1 can be ignored. T2, T3 and T4 are taken as 0μs. T5 is calculated according to the ROI settings and the characteristics of each sensor. The exposure delay data for each model is as follows:
Model / Exposure delay (μs)
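For reference, a quick calculation with the values quoted above; T5 here is only a placeholder, since the real value depends on the sensor model and ROI settings.

```python
# Worked example of summing the five exposure-delay components.
T1 = 5.0             # us, typical Line0 opto-isolator delay (0 for Line2 / CXPTrigger)
T2 = T3 = T4 = 0.0   # us, as assumed in the manual's calculation
T5 = 30.0            # us, placeholder; actual value is sensor- and ROI-dependent

exposure_delay = T1 + T2 + T3 + T4 + T5
print(f"total exposure delay = {exposure_delay} us")
```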
Figure 8-34 The camera's response curve
8.3.2. Sensor Bit Depth
By setting "Sensor Bit Depth", the user can change the bit depth of the sensor output data. Reducing the sensor bit depth improves the camera frame rate, while increasing the sensor bit depth improves the image quality.
[2.8x-5.2x] step 0.4x; [0.5x-2.8x] step 0.1x
MARS-15200-16X2M/C: [0.5x-2.8x] step 0.1x; [2.8x-5.2x] step 0.4x
MARS-2625-150X2M/C, MARS-2626-150X2M/C: [1x-2.5x] step 0.5x; [1x-2.5x] step 0.5x
MARS-6502-71X2M/C, MARS-6503-71X2M/C: [0.75x-1.25x] step 0.25x; [0.75x-1.25x] step 0.25x
MARS-24600-12X2M/C: [0x-16x] step 0.1x; [0x-16x] step 0.1x
MARS-513-940X2M/C, MARS-2100-230X2M/C and MARS-2101-230X2M/C cameras have only 4 gears of PGA gain due to the characteristics of the sensor, and the setting value will take the parameter value corresponding to the gear as the valid parameter.
Mono8
Figure 8-35 Mono8 pixel format
When the pixel format is set to Mono8, the brightness value of each pixel is 8 bits. The format in memory is as follows:
Y00 Y01 Y02 ...
Y10 Y11 Y12 ...
...
where Y00, Y01, Y02, ... are the gray values of the pixels in the first row of the image, Y10, Y11, Y12, ... are the gray values of the pixels in the second row, and so on.
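As a small illustration of this row-major layout (not tied to any SDK), the sketch below indexes a raw Mono8 buffer with numpy; the buffer contents are made up for the example.

```python
# Mono8 layout: one byte per pixel, stored row by row, so pixel (r, c) sits at
# offset r * width + c in the raw buffer.
import numpy as np

width, height = 6, 4
buf = np.arange(width * height, dtype=np.uint8)  # stand-in for a raw frame buffer

image = buf.reshape(height, width)               # row-major 2-D view, no copy
print(image[1, 2], buf[1 * width + 2])           # Y12 accessed both ways
```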
BayerRG8
Figure 8-36 BayerRG8 pixel format
When the pixel format is set to BayerRG8, the value of each pixel in the camera's output image is 8 bits. Depending on its location, each pixel represents a red, green or blue component.
BayerGR8
Figure 8-37 BayerGR8 pixel format
When the pixel format is set to BayerGR8, the value of each pixel in the camera's output image is 8 bits. Depending on its location, each pixel represents a red, green or blue component.
BayerGB8
Figure 8-38 BayerGB8 pixel format
When the pixel format is set to BayerGB8, the value of each pixel in the camera's output image is 8 bits. Depending on its location, each pixel represents a red, green or blue component.
8.3.5. ROI
By setting the ROI of the image, the camera can transmit a specific region of the image. The output region's parameters are OffsetX, OffsetY, and the width and height of the output image. The camera reads only the image data of the sensor's designated region into memory and transfers it to the host; the image data of the other regions of the sensor is discarded.
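A minimal sketch of an ROI setup with these parameters, again using a dict as a stand-in for the camera's feature map. The example values are arbitrary, and the model-specific step sizes and range limits are not checked here; setting Width/Height before the offsets is a common practice to avoid range conflicts, not a requirement stated in the manual.

```python
# Sketch only: a dict stands in for the camera's feature map.
features = {}
features["Width"] = 1024    # width of the output region
features["Height"] = 768    # height of the output region
features["OffsetX"] = 256   # offset from the left edge of the sensor
features["OffsetY"] = 128   # offset from the top edge of the sensor
print(features)
```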
The AAROI is defined as follows:
AAROIOffsetX: the offset in the X direction.
AAROIOffsetY: the offset in the Y direction.
AAROIWidth: the width of the ROI.
AAROIHeight: the height of the ROI.
The offset is relative to the upper left corner of the image. The horizontal and vertical step sizes are consistent with the horizontal and vertical steps of the ROI.
Auto Exposure/Auto Gain
Auto gain adjusts the camera's gain automatically, and auto exposure adjusts the camera's exposure time automatically, so that the average gray value in the AAROI reaches the expected gray value. Both auto gain and auto exposure can be controlled in "Once" and "Continuous" mode. When using the "Once"
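A combined sketch of the AAROI and auto-function settings, with a dict standing in for the feature map. The AAROI* names come from the manual; "ExposureAuto"/"GainAuto" with values "Once"/"Continuous" follow common SFNC usage and are assumptions here.

```python
# Sketch only: define the auto-function ROI, then enable the auto functions.
features = {}
features["AAROIOffsetX"] = 0
features["AAROIOffsetY"] = 0
features["AAROIWidth"] = 640
features["AAROIHeight"] = 480
features["ExposureAuto"] = "Continuous"  # keep adjusting exposure toward the expected gray value
features["GainAuto"] = "Once"            # adjust gain once, then hold it
print(features)
```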
SlantLineMoving
In the moving slant-line (gray gradient) test image, the first pixel value of each successive row in a frame increases by 1, down to the last row. When a pixel's gray value reaches 255, the next pixel's gray value returns to 0.
8.3.8. User Set Control
By setting the various parameters of the camera, the camera can achieve its best performance in different environments. There are two ways to set parameters: modify the parameters manually, or load a parameter set. To save users' specific parameters and avoid setting the parameters every time the camera is opened, the MARS camera provides a function to save parameter sets, which conveniently stores the parameters the user uses, including the control parameters that the camera needs.
Save parameters (UserSetSave): save the currently effective configuration parameters to the user configuration parameters. The storage steps are as follows:
1. Modify the camera's configuration parameters until the camera runs as required.
2. Use UserSetSelector to select UserSet0.
3. Execute the UserSetSave command.
The camera's configuration parameters which are saved in the user parameter set include:
StaticDefectCorrection, FlatFieldCorrection, FFCCoefficient, DSNUControl, PRNUControl, ColorTransformationEnable, ColorTransformationValue, LightSourcePreset, SaturationMode, Saturation
Load parameters (UserSetLoad): load the vendor default configuration parameters or the user configuration parameters into the effective configuration parameters. After this operation is performed, the effective configuration parameters are overwritten by the loaded parameter set selected by the user, and new effective configuration parameters take effect.
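A minimal sketch of this save/load sequence, with a dict standing in for the feature map and a stub for command execution; the feature and command names (UserSetSelector, UserSetSave, UserSetLoad) are those given in the manual.

```python
# Sketch only: not tied to a specific SDK.
def save_to_userset0(features, execute):
    features["UserSetSelector"] = "UserSet0"
    execute("UserSetSave")     # store the currently effective parameters in UserSet0

def load_userset0(features, execute):
    features["UserSetSelector"] = "UserSet0"
    execute("UserSetLoad")     # overwrite the effective parameters with UserSet0

features, execute = {}, lambda cmd: print("execute:", cmd)
save_to_userset0(features, execute)
load_userset0(features, execute)
```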
When using multiple cameras at the same time, it is necessary to ensure that the user-defined name of each camera is unique; otherwise, an exception will occur when the camera is opened.
8.3.10. Binning
The Binning feature combines multiple adjacent sensor pixels into a single value, either averaging or summing the pixel values, which may increase the signal-to-noise ratio or the camera's response to light.
8.Features Binning Factors Two types of Binning are available: horizontal Binning and vertical Binning. You can set the Binning factor in one or two directions. Horizontal Binning is the processing of pixels in adjacent rows. Vertical Binning is the processing of pixels in adjacent columns. Binning factor 1: Disable Binning.
4) Mutually exclusive with Decimation: Binning and Decimation cannot be used simultaneously in the same direction. When the horizontal Binning value is set to a value other than 1, the horizontal Decimation feature cannot be used. When the vertical Binning value is set to a value other than 1, the vertical Decimation feature cannot be used.
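As a purely numerical illustration of what 2×2 Binning does to pixel values (this mimics the behaviour in software and is not the camera's internal implementation), consider:

```python
# 2x2 Binning on a toy mono frame: adjacent pixels are summed or averaged,
# trading resolution for signal.
import numpy as np

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy 4x4 mono frame

blocks = img.reshape(2, 2, 2, 2)       # split into 2x2 blocks
summed = blocks.sum(axis=(1, 3))       # "Sum" Binning mode
averaged = blocks.mean(axis=(1, 3))    # "Average" Binning mode
print(summed)
print(averaged)
```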
Figure 8-51 Mono camera horizontal Decimation
Figure 8-52 Color camera horizontal Decimation
As a result, the image width is reduced. For example, enabling horizontal Decimation by 2 halves the image width. The camera automatically adjusts the image ROI settings. Horizontal Decimation does not (or only to a very small extent) increase the camera's frame rate.
4) Mutually exclusive with Binning: Decimation and Binning cannot be used simultaneously in the same direction. When the horizontal Decimation value is set to a value other than 1, the horizontal Binning feature cannot be used. When the vertical Decimation value is set to a value other than 1, the vertical Binning feature cannot be used. On some camera models, the user can select whether Decimation is performed in the sensor or in the FPGA.
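A quick software illustration of the sampling pattern of Decimation by 2 (not the camera's internal implementation): only every second column or row is kept.

```python
# Decimation by 2 on a toy mono frame via simple sub-sampling.
import numpy as np

img = np.arange(36).reshape(6, 6)
h_decimated = img[:, ::2]   # horizontal Decimation by 2 -> width halved
v_decimated = img[::2, :]   # vertical Decimation by 2 -> height halved
print(h_decimated.shape, v_decimated.shape)
```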
Figure 8-57 The original image
Figure 8-58 Reverse X and Y enabled
Using Image ROI with Reverse X or Reverse Y
If you have specified an image ROI while using Reverse X or Reverse Y, you must bear in mind that the position of the ROI relative to the sensor remains the same.
stop acquiring before setting up Reverse Y, and then set the Reverse Y option to true to enable it. The camera will then output vertically flipped images during acquisition.
8.3.13. Acquisition Status
The Acquisition Status feature can be used to determine whether the camera is waiting for trigger signals. This is useful if you want to optimize triggered image acquisition and avoid overtriggering.
Model: MARS-6500-31X2M/C, MARS-6501-31X2M/C
Feature / Set the switch to off / Set the switch to on
Exposure / 14~1000000 / 14~60000000
Auto Exposure / 14~1000000 / 14~1000000
Gain / 0~16 / 0~24
Auto Gain / 0~16 / 0~24
PGA Gain / 0.75~6.0 / 0.75~6.0
Black Level / -1023~1024 / -1023~1024
White Balance component / 0~15.998 / 0~31.998
Auto White Balance...
Model: MARS-513-940X2M/C
Feature / Set the switch to off / Set the switch to on
Exposure / 1~1000000 / 1~60000000
Auto Exposure / 1~1000000 / 1~1000000
Gain / 0~16 / 0~24
Auto Gain / 0~16 / 0~24
PGA Gain / 1~2.0 / 1~3.0
Black Level / BPP12: 0~4095 / BPP12: 0~4095
White Balance component / 0~15.998 / 0~31.998
Auto White Balance / 0~15.998 / 0~31.998
Exposure / 1~1000000 / 1~60000000
Auto Exposure / 1~1000000 / 1~1000000
Gain...
8.3.17. Timer
The camera supports only one timer (Timer1), which can be started by a specified event or signal (only the ExposureStart signal is supported). The Timer can generate a timer output signal that goes high on a specific event or signal and goes low after a specified duration; the timer is cleared when the output signal goes low.
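A configuration sketch for Timer1, with a dict standing in for the feature map. Timer1, the ExposureStart trigger and the Timer1Active line source come from the manual; the remaining feature names (TimerSelector, TimerTriggerSource, TimerDelay, TimerDuration, LineSelector) follow SFNC conventions and are assumptions, as are the microsecond units.

```python
# Sketch only: start Timer1 on ExposureStart and route it to an output line.
features = {}
features["TimerSelector"] = "Timer1"
features["TimerTriggerSource"] = "ExposureStart"  # the only supported start signal
features["TimerDelay"] = 0         # assumed: time before the output goes high
features["TimerDuration"] = 5000   # assumed: time the output stays high

features["LineSelector"] = "Line1"
features["LineSource"] = "Timer1Active"           # output source listed in section 8.1
print(features)
```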
8.3.18. Counter
The camera supports only one counter (Counter1), which can count the FrameTrigger, AcquisitionTrigger and FrameStart signals received by the camera. The counter starts counting from 0. You can select which of these three signals to count with CounterEventSource. The counted FrameTrigger and AcquisitionTrigger signals are the signals that have passed trigger filtering, before any trigger delay is applied.
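A matching sketch for Counter1; CounterEventSource comes from the manual, while CounterSelector and the CounterReset command follow SFNC conventions and are assumptions here.

```python
# Sketch only: count FrameStart signals with Counter1.
features, execute = {}, lambda cmd: print("execute:", cmd)
features["CounterSelector"] = "Counter1"
features["CounterEventSource"] = "FrameStart"  # or FrameTrigger / AcquisitionTrigger
execute("CounterReset")                        # assumed command: restart counting from 0
print(features)
```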
Daylight-6500K
When the user selects Daylight-6500K as the light source preset, the camera performs white balance processing on the image by default, and it also supports manually modifying the white balance coefficients or enabling white balance. If the ambient light source is a D65 source, the image will show no color deviation.
Figure 8-65 Color template
The user can shoot a color template containing 24 colors with the camera. The RGB value of each captured color may differ from the standard RGB value of the standard color template; the vendor can then use software or hardware to convert the RGB values that are read out into the standard RGB values.
8.4.4. Gamma
The Gamma feature can optimize the brightness of acquired images for display on a monitor.
1) Prerequisites
If the GammaEnable parameter is available, it must be set to true.
2) How it works
The camera applies a Gamma correction value (γ) to the brightness value of each pixel according to the following formula (red pixel value (R) of a color camera shown as an example): γ
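The formula itself is cut off in this excerpt; assuming the usual normalized form R_out = (R_in / R_max)^γ × R_max, a quick numerical illustration looks like this:

```python
# Gamma correction applied to a few 8-bit red values (R_max = 255).
import numpy as np

gamma = 0.7                                   # example correction value
red = np.array([0, 64, 128, 192, 255], dtype=np.float64)
red_corrected = (red / 255.0) ** gamma * 255.0
print(np.round(red_corrected).astype(np.uint8))
```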
8.4.5. Lookup Table
After the analog signal read out by the sensor has been converted by the ADC, the raw data bit depth is generally larger than 8 bits, e.g. 12 bits or 10 bits. The lookup table feature replaces pixel values in the 8-bit or 12-bit image with values defined by the user.
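A small illustration of how a user-defined lookup table is applied to 8-bit data (12-bit data would use a 4096-entry table in the same way); the brightness-boosting table here is just an example.

```python
# Apply a 256-entry LUT to an 8-bit image by indexing.
import numpy as np

lut = np.clip(np.arange(256) * 1.2, 0, 255).astype(np.uint8)  # example LUT: +20% brightness
img = np.array([[10, 100], [200, 250]], dtype=np.uint8)
print(lut[img])   # each pixel value is replaced by its LUT entry
```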
Figure 8-68 Before PRNU correction
Figure 8-69 After PRNU correction
PRNU correction supports 17 sets of parameters (default and Set0~Set15). When the user chooses "default", PRNU correction is executed but the PRNU correction values cannot be saved in the device; after clicking PRNULoad, the image is restored to the uncalibrated state.
Figure 8-71 After DSNU correction
DSNU correction supports 17 sets of parameters (default and Set0~Set15). When the user chooses "default", DSNU correction is executed but the DSNU correction values cannot be saved in the device; after clicking DSNULoad, the image is restored to the uncalibrated state. When the user chooses "Set0~Set15", the DSNU correction values are saved in the device through "DSNUSave"
8.4.9. Static Defect Pixel Correction
Due to technical defects of the image sensor, the camera inevitably has some defect pixels. Some of these defect pixels are fixed at the same gray value and do not change with the scene; these are called dead pixels.
Please use the static defect pixel correction function in full-frame mode for calibration.
8.4.10. Flat Field Correction
During use of the camera, various inconsistencies may appear in the image, mainly in the following aspects: inconsistent response of individual pixels.
If the camera has saved flat field correction parameters, it needs to load the coefficients from non-volatile memory when powered on, which may lengthen the camera's startup time. During the coefficient loading period, the camera constantly displays an orange light and cannot be enumerated.
2) Configuring saturation
Enter the expected value for the Saturation parameter; the range is 0 to 128. By default, the parameter is set to 64 (no saturation adjustment is performed).
3) How it works
The saturation adjustment is performed by a 3×3 matrix. When the saturation intensity is modified, the saturation is changed by modifying the adjustment matrix A.
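The camera's actual matrix A is not given in this excerpt; as an illustration of the mechanism only, the sketch below uses the common luminance-preserving form A = (1 - s)·L + s·I with Rec.601 luminance weights, which reduces to the identity at s = 1.

```python
# Saturation adjustment of one RGB pixel via a 3x3 matrix (illustrative only).
import numpy as np

s = 1.5                                 # > 1 boosts saturation, < 1 reduces it
w = np.array([0.299, 0.587, 0.114])     # Rec.601 luminance weights
A = (1 - s) * np.tile(w, (3, 1)) + s * np.eye(3)

rgb = np.array([120.0, 80.0, 200.0])    # one example pixel
print(np.clip(A @ rgb, 0, 255))
```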
Figure 8-79 GUI
Figure 8-80 Before sharpness adjustment
Figure 8-81 After sharpness adjustment
Sharpness adjustment
Adjusting the sharpness value changes the amount of sharpening the camera applies to the image. The adjustment range is 0 to 7.0; the larger the value, the higher the sharpness.
8.4.13. Fixed Pattern Noise Correction
The manufacturing process and internal structure of image sensors can cause streak and checkerboard pattern artifacts. Fixed pattern noise correction is a processing algorithm targeted at this regular, fixed pattern noise. The function removes streaks and checkerboard patterns while preserving the edges of the image, improving its visual quality.
Among them:
ImageSize = (Width × Height × PixelSize) ÷ 8 + 25 + 2 × Height + PacketNum × 32
: the camera's frame period, unit: μs.
Width: the current image width.
Height: the current image height.
PixelSize: the size of a pixel; the value is 8 in 8-bit mode, 10 in 10-bit mode, and 12 in 12-bit mode.
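A worked example of the ImageSize formula for a hypothetical 5120 × 5120 frame in 8-bit mode; the PacketNum value used here is only a placeholder, since the real packet count depends on the link configuration, and the result is presumably in bytes.

```python
# ImageSize = (Width x Height x PixelSize) / 8 + 25 + 2 x Height + PacketNum x 32
width, height, pixel_size = 5120, 5120, 8   # pixel_size: 8, 10 or 12 bits
packet_num = 3200                           # placeholder packet count

image_size = (width * height * pixel_size) // 8 + 25 + 2 * height + packet_num * 32
print(f"ImageSize = {image_size}")
```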
If the DeviceLinkThroughputLimit is less than the current device acquisition bandwidth, the current device acquisition bandwidth is reduced to the DeviceLinkThroughputLimit. When the camera is working in trigger mode, the bandwidth limit restricts the maximum trigger frequency.
Example 1: The MARS-15200-16X2M/C-TF is working in continuous mode, the DeviceLinkCurrentThroughput is 2500000000Bps, the DeviceLinkThroughputLimit is 5000000000Bps, and then the DeviceLinkCurrentThroughput
MARS-10300-24X2M/C
When the sensor bit depth is 8bit, the row period T (unit: μs) = 4.25
When the sensor bit depth is 12bit, the row period T (unit: μs) = 4.25
The camera acquisition time (unit: μs) = (Height × VerticalBinning + 12) × T
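A worked example of the acquisition-time formula above, using the MARS-10300 row period T = 4.25 μs and an arbitrary example height with no vertical Binning.

```python
# Acquisition time = (Height x VerticalBinning + 12) x T
T = 4.25                 # us, row period at 8-bit or 12-bit sensor bit depth
height = 5120            # example image height in lines (arbitrary)
vertical_binning = 1

acquisition_time_us = (height * vertical_binning + 12) * T
print(f"acquisition time = {acquisition_time_us:.0f} us = {acquisition_time_us / 1000:.2f} ms")
```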
The camera acquisition time (unit: μs) = (Height × VerticalBinning + 12) × T
MARS-2625-150X2M/C \ MARS-2626-150X2M/C
When the sensor bit depth is 8bit, the row period T (unit: μs) = 1.29
When the sensor bit depth is 10bit, the row period T (unit: μs) = 1.334
The camera acquisition time (unit: μs) = (Height × VerticalBinning + 12) ×
8.Features 8.6. Sequencer The Sequencer feature allows you to define sets of parameter settings and apply them to a sequence of image acquisitions. As the camera acquires images, it applies one sequence set after the other, as shown below. Figure 8-87 Sequencer feature schematic diagram 8.6.1.
[SequencerSetSelector] Set the sequence set number. The range is determined by the camera model.
[SequencerSetSave] Save the parameters to the sequence set selected in "SequencerSetSelector".
[SequencerSetLoad] Click "SequencerSetLoad" to overwrite the current sequence set parameter values with the values stored in the selected sequence set.
[SequencerSetActive] When "SequencerMode"
For example, if we want the sequence sets to run in the order 0->1->2->0->1->2, the setting is as follows:
Set "SequencerSetSelector" to 2.
Set "SequencerSetNext" to 0.
Before "SequencerMode" is set to "On", the auto gain, auto exposure and auto white balance functions must be set to "Off".
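A sketch of the full 0 -> 1 -> 2 -> 0 chain, with a dict standing in for the feature map and a stub for command execution; all feature and command names here (SequencerMode, SequencerSetSelector, SequencerSetNext, SequencerSetSave) are the ones listed in the GUI description above.

```python
# Sketch only: link the sequence sets 0 -> 1 -> 2 -> 0 and enable the sequencer.
def link_set(features, execute, current_set, next_set):
    features["SequencerSetSelector"] = current_set
    features["SequencerSetNext"] = next_set
    execute("SequencerSetSave")          # store the link in the selected sequence set

features, execute = {}, lambda cmd: print("execute:", cmd)
features["SequencerMode"] = "Off"        # configure with the sequencer off; auto functions must also be off
for cur, nxt in [(0, 1), (1, 2), (2, 0)]:
    link_set(features, execute, cur, nxt)
features["SequencerMode"] = "On"         # start applying the sets in order
```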
[TECTargetTemperature] Sets the target temperature. Range: -10°C ~ +60°C (default: 0°C). Change the value to adjust the target temperature; the camera will regulate the temperature according to the current temperature, the target temperature and its adjustment algorithm.
[FanEnable] Set the parameter to "true" to turn on the fan, or to "false" to turn it off. Turning on the fan reduces the temperature of the camera.
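A minimal sketch of these two cooling controls, again with a dict standing in for the camera's feature map; the values are examples.

```python
# Sketch only: enable the fan and set the TEC target temperature.
features = {}
features["FanEnable"] = True             # turn the fan on to help cooling
features["TECTargetTemperature"] = 0     # target temperature in deg C (range -10 to +60)
print(features)
```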
9.1. LUT Create Tool
9.1.1. GUI
The LUT Create Tool supports all series of DAHENG IMAGING cameras and is integrated into GalaxyView.exe. After opening the device you want to operate through this software, you can open the LUT Create Tool from the plugin list in the menu bar. With the plugin you can achieve the following functions: adjust the image Gamma, brightness, and contrast.
[Save LUT] Write the currently generated LUT to the device or save it to a LUT/CSV file.
[Polyline Drawing Area] Represents the currently generated LUT as a curve.
9.1.2. User Guide
9.1.2.1. User Case
After you select "Select Base LUT" and adjust the LUT parameters to a satisfactory effect, if you want to save the currently set parameters and restore them after the camera is powered on again, you need to select "Write To Device".
If the device does not support reading/writing a LUT, or does not support using a LUT on other terminal devices after the LUT effect has been adjusted through this terminal, you can use the "Save To File" function. After adjusting the LUT, select "Save To File" and choose LUT as the save format. Then select "LUT File" in Select Base LUT again and select the saved LUT file to restore the parameters.
Figure 9-5 Do not support LUT
Figure 9-6 Single parameter group LUT
When "Read From Device" is selected, the polyline graph and image effects are updated to the lookup table in the device. If the Standard LUT or Default is selected and "Write To Device" is selected, the written parameters will be updated to the GUI on the next read.
LUT file
After selecting "LUT file", a dialog box for selecting the file pops up. You can select a file in .lut format, and the polyline diagram and the device's image acquisition effect are updated accordingly. If you select "Standard LUT"
9.Software Tool After selecting the Select Base LUT, when the above parameters are modified, the generated LUT will be written to the device Flash in real time. At this time, the "Write To Device" is not selected. After the device is powered off and restarted, the modified parameters will be lost.
Using the API interface: read the .lut file through the ReadLUTFile interface in the GxIAPI library and DxImageProc library and parse it into a lookup table format that can be written to the corresponding camera. The specific steps are as follows: get the length of the lookup table.
1%.
9.3. Static Defect Correction Plugin
The Static Defect Correction Plugin supports all series of DAHENG IMAGING digital cameras. The plugin is integrated into GalaxyView.exe. After opening the device through GalaxyView, open the Static Defect Correction plugin from the plugin list in the menu bar.
Save the defect pixel information to the device (for cameras that support Static Defect Correction).
Save the defect pixel information to a file.
9.3.1. GUI
Figure 9-11 Static Defect Correction GUI
After opening the device through GalaxyView.exe and opening the Static Defect Correction plugin, the initial state of the GUI is shown in Figure 9-11.
SavetoFile: Save the defect information to a file
Image display area: Displays the image. After counting the defect pixels and noise points, the locations of the defect/noise pixels are marked on the displayed image
Merged pixel number: Displays the number of defects
Change the color of merged pixels
Manually mark the defects on the image
9.3.2. User Guide
9.3.2.1. Static Defect Correction Steps
1. Click "Catch" to capture an image. For details, please see section 9.3.2.2.
2. Set the threshold to determine the range of defect pixels.
3. Check "Bright dark scene" or "Actual scene" to select the type of defect pixels.
4. Click "Count"
The defect pixel data file format is ".dp" or ".csv", and the default save path is under the installation package directory: *\Daheng Imaging\GalaxySDK\Demo\Win64\resource\gxplugins\DefectPixelCorrection. When you need to implement the Static Defect Correction function with the SDK, you can read the saved defect pixel data file and call the image processing library function DxStaticDefectPixelCorrection to perform Static Defect Correction on the image.
10. FAQ
Question (General): Use Matrox frame grabber and GalaxyView to read-write properties of the camera. The blue...
Answer: Please connect the frame grabber to the camera in the following sequence:
Card CH0 <<==>> Cam CH1
Card CH1 <<==>> Cam CH2
Card CH2 <<==>>...
Question (General): Use Matrox frame grabber and GalaxyView to operate the camera to acquire. After clicking the close button directly in the upper right corner, GalaxyView crashes.
Answer: Stop acquisition and turn off the camera before turning off GalaxyView.
Question (General): Use Euresys frame grabber and GalaxyView to operate the camera to acquire. ...
Answer: Please open the camera with eGrabber Studio ...
Question (General): Abnormal image acquisition or 0 frames with Euresys frame grabber.
Answer: Check and modify the following configurations of the frame grabber:
DeviceTapGeometry: Make sure it is the same as the camera, for example, 1X-2YE for the 2100 camera.
Image1streamID: Should be set to 1 if the value of DeviceTapGeometry is 1X-2YE.
LineWidth: Make sure it is the same as the...
11. Revision History
Version / Changes / Date
V1.0.0: Initial release (2023-09-25)
V1.0.1 (2024-05-11):
Add MARS-2100-230X2M/C-NF, MARS-2101-230X2M/C-NF.
Update the description of sections 2.5, 3.2, 3.3, 4.2.1, 5, 7.3, 8.3.3, 8.3.5, 8.3.7, 8.3.12, 8.3.14.1, 8.5.3, 8.5.4, and 10.
Add section 8.3.2 Sensor Bit Depth.
Update the UI interface and usage description related to the software...
If you need to order products or inquire about product information, please contact our sales team:
Tel: +86 10 8282 8878-8081
Email: isales@daheng-imaging.com
12.2. Contact Support
If you have any questions about using DAHENG IMAGING products, please contact the experts on our support team:
Tel: +86 10 8282 8878
Email: isupport@daheng-imaging.com